Facebook improves connectivity in the African region with an undersea internet cable

As part of a plan to develop Internet infrastructure, Facebook and its regional and global partners have announced the laying of a new segment of the 2Africa submarine cable, 2Africa Pearls, which will connect three continents: Europe, Asia and Africa. With the new segment, the 2Africa cable system will be more than 45 thousand kilometers long; alongside the Pacific Southern Cross, it will become one of the longest systems ever built.

Africa is today the least connected continent: only a quarter of its 1.3 billion people are online. That is why, back in 2020, Facebook announced an expansion of African Internet infrastructure within the framework of the 2Africa project. The system will provide three times the network capacity of all submarine cables serving Africa today combined, and 2Africa is expected to bring the number of people on the continent with Internet access to 1.2 billion. With the addition of the Pearls segment, the system will reach another 1.8 billion people; in total, it will be used, in one way or another, by 3 billion people, or about 36% of the world's population.

2Africa Pearls will add landing points in Oman, the UAE, Qatar, Bahrain, Kuwait, Iraq, Pakistan, India and Saudi Arabia. Overall, 2Africa will provide a better connection between Africa and the rest of the world, as well as cross-connections between 33 countries in Africa, the Middle East and Europe.

Over the past year and a half, the importance of Internet connectivity has become more evident than ever: during the pandemic, literally billions of people have worked from home, studied remotely and kept in touch with loved ones. Facebook continues to invest in submarine cables in Africa and beyond, and intends to keep creating solutions such as fiber-optic "superhighways".

7 November 2021
Can blind spots be avoided when monitoring DCs?

A recent IDC report found that data center operators currently face three major challenges: reduced performance, downtime and low bandwidth. With demand for data center services growing, it is more important than ever that operators avoid the blind spots that lead to business and reputational risk. So how can data center managers mitigate these risks? There are three major pitfalls of monitoring solutions to consider first.

1. You cannot trace everything. Sensors report conditions only where they are located, yet conditions can vary significantly across a facility; as a result, operators receive an incomplete and therefore inaccurate picture.

2. You cannot monitor resilience. Monitoring systems report current conditions but cannot predict the consequences of failures. The only reliable way to test resilience without modeling is to deliberately induce a worst-case failure, which is not an option.

3. You cannot monitor the future. Sensors know nothing about future conditions, and relying on historical information for forecasting is a huge risk.

Eliminating blind spots. How can operators overcome these limitations? The answer is to combine tools that describe the past and present with a tool that sheds light on how the environment will behave in the future. This requires computational fluid dynamics (CFD). A virtual, CFD-based simulation of the entire data center allows operators to calculate the environmental conditions of the facility accurately. Virtual sensors, for example, can be checked against real sensors to ensure that the simulated data reflects reality, so the results can be used to investigate conditions anywhere and in great detail. CFD also goes beyond temperature maps: it models humidity, air velocity and pressure. For example, you can trace the movement of an air flow from one location to another, which helps explain the cause of overheating problems. Crucially, the technology lets operators simulate future resilience: a CFD model can describe any data center configuration, simulating changes to the current layout or entirely new, not yet deployed configurations.
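
As a rough illustration of the idea behind CFD-style modeling (not any vendor's actual product), the hedged Python sketch below runs a toy 2-D heat-diffusion simulation of a machine room: a few cells act as heat sources (racks), one wall is cooled, and "virtual sensors" read temperatures at points where no physical sensor exists. All names and values are illustrative assumptions; real CFD solves the full fluid-dynamics equations.

import numpy as np

# Toy 2-D temperature grid for a small machine room (illustrative only).
GRID = (20, 30)          # rows x cols, arbitrary resolution
AMBIENT = 22.0           # assumed ambient temperature, degrees C
COOLED_EDGE = 16.0       # assumed supply-air temperature along one wall
RACKS = [(5, 10), (5, 15), (12, 10), (12, 15)]  # hypothetical rack positions
RACK_HEAT = 0.8          # heat added per step at each rack cell (arbitrary units)
ALPHA = 0.2              # diffusion coefficient for the toy model

def step(temp):
    """One explicit diffusion step with heat sources and a cooled wall."""
    t = temp.copy()
    # Discrete Laplacian: sum of the four neighbours minus four times the cell.
    lap = (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
           np.roll(t, 1, 1) + np.roll(t, -1, 1) - 4 * t)
    t += ALPHA * lap
    for r, c in RACKS:              # racks inject heat
        t[r, c] += RACK_HEAT
    t[:, 0] = COOLED_EDGE           # cold wall held at supply temperature
    return t

temp = np.full(GRID, AMBIENT)
for _ in range(500):                # run until the field roughly settles
    temp = step(temp)

# "Virtual sensors": query any point of the simulated field, including spots
# where no physical probe is installed.
virtual_sensors = {"hot_aisle": (8, 20), "cold_aisle": (8, 2), "corner": (18, 28)}
for name, (r, c) in virtual_sensors.items():
    print(f"{name:10s} ~ {temp[r, c]:.1f} C")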

3 November 2021
Influence of automation in DC on engineering personnel

According to the Uptime Institute's 2021 Data Center Industry Survey, there is still a shortage of skilled workers in the industry, and artificial intelligence is not expected to solve this problem in the near future. As demand for digital skills grows, the scarcity is becoming more acute.

In addition, today's data center teams increasingly need both operational and development skills. The data center engineer must still handle preventive maintenance of equipment, but must also have the technical knowledge to use mobile applications and be able to dig into any type of system and extract telemetry from it. Everything can now be viewed, listened to and touched remotely with the help of software, even before the engineer enters the room. This goes beyond the traditional understanding of how data centers are structured and how software is used.

The rise of automation will foster the use of AI and machine learning applications, as well as tools capable of analyzing large amounts of data. This opens up opportunities for people who can work as analysts and data scientists, using the latest tools to extract insights from massive data sets.

What new job responsibilities should today's data center professionals prepare for? What kind of education or skills do they need? Data center professionals can handle the transition to new roles in several ways. First, they should be familiar with the underlying architecture of the various public and private clouds. Second, they must hone their skills in critical areas such as infrastructure as code, DevOps, intelligent analytics, data operations, container building and operation (Kubernetes), service meshes, and serverless technologies. With this knowledge, they will be well prepared to manage data centers now and in the future.

In many ways, finding a unicorn now means finding someone who combines diagnostic thinking with the ability to work on a technology platform. I don't think we will ever replace the data center engineer, but we will need to combine the two extremes: integrating software and training existing technicians who have attended vocational schools or hold certifications in electrical engineering and controls, ventilation and mechanics, or telecommunications and cabling. It is about the mindset of a mechanic with a data overlay.

30 October 2021
The frequency of accidents in DCs has decreased, but the damage they cause has increased significantly

Data from the Uptime Institute's annual survey of data center outages and their causes shows fewer serious disruptions: only 6 percent of respondents reported serious ("Category 5") incidents in 2020, down from 11 percent a year earlier. Interestingly, against the background of fewer major accidents, the financial and reputational damage from such incidents is, unfortunately, growing, which experts attribute to the ever-increasing dependence of business and government organizations on IT infrastructure. In particular, more than half of the respondents who reported downtime in the past three years estimated the cost of a recent significant accident at more than $100,000, with about a third claiming damages of $1 million or more.

UPS circuit breakers and batteries should be checked carefully. The Uptime Institute combined the survey data with direct analysis of customers' power-usage patterns to draw some conclusions about the causes of downtime. Local power outages remain the most common cause of power-related data center failures, and components such as UPS batteries and circuit breakers are often the most vulnerable points of failure. The organization's experts note that, to reduce the cost of building a data center, owners and operators are advised to install so-called distributed redundant systems (DRS): in practice, this means deploying two independent battery arrays feeding the UPS, each able to take on the entire load rather than just part of it.

The human factor. While most of the blame for data center downtime lies with technology, the notorious human factor also needs to be considered. A recent Uptime Institute survey showed that accidents caused by data center operators are quite common: 42% of respondents said they have had a failure due to human error in the past three years. Of those, 57% cited improper work by data center personnel (non-compliance with procedures) as the cause of the accident, and the remaining 43% cited incorrect processes or procedures as the main reason for downtime. The study clearly shows that a stronger emphasis on effective human resource management and continuous training of data center personnel will improve the quality of service for critical infrastructure and minimize the risk of downtime.

26 October 2021
The rise in popularity of IPv6 has not yet reached the critical mass needed to compete with IPv4

According to analysts' forecasts, tens of billions of IoT devices will be operating worldwide in the near future. The old IPv4-based addressing system, which more or less managed to provide IP addresses for the "Internet of people", is practically useless in the coming era of the "Internet of machines". The newer protocol, IPv6, aims to solve the problem.

What does IPv6 offer? The documents defining the new Internet protocol were issued by the Internet Engineering Task Force back in the mid-90s, and the official launch of IPv6 on a permanent basis took place on June 6, 2012. Many companies started switching to it earlier, for example Google in 2008. The protocol got the number "6" because the name IPv5 had been reserved for an experimental real-time protocol that never shipped; it did not disappear entirely, though, as many of its concepts can be found in MPLS. Thanks to IPv6's 128-bit addressing scheme, the number of available network addresses is 2^128. Such a large address space makes NAT unnecessary (there are enough addresses for everyone) and simplifies routing. Routers, for example, no longer have to fragment packets, and it is now possible to forward "jumbo" packets of up to 4 GB.

Implementation of IPv6: what is holding it back? Eight years after the official launch, IPv6 is gradually being implemented in the networks of telecom operators and Internet service providers in different countries, coexisting with its predecessor, IPv4. Mobile operators and Internet providers are the most active users of the new addressing system. According to the industry group World IPv6 Launch, T-Mobile USA carries almost 95% of its traffic over IPv6 and Sprint Wireless 89%. There are champions of progress in other countries too: India's Reliance Jio Infocomm (90%) and Brazil's Claro Brasil (66%). Among Russian operators, MTS ranks highest, in 83rd place with 55%. In the country rankings of IPv6 penetration, according to Google, the leaders are Belgium (52.3%), Germany (50%), India (47.8%) and Greece (47.6%). The United States is at only 40.7%, less than, for example, Vietnam (43.1%). Russia has little to brag about (5.6%), while China sits at just 0.34%.

When will IPv4 be "switched off"? Formally, most likely no one will switch IPv4 off. When the new protocol was launched in June 2012, the dual-stack model was chosen, in which IPv4 and IPv6 networks operate in parallel. In most of the world, new IPv4 addresses "ran out" between 2011 and 2018, but it is already clear that these addresses will be sold and reused for quite some time. However, as adoption of the new protocol grows, operators and ISPs can be expected to start charging companies for IPv4 addresses while IPv6 remains free of charge. Some services may become available only on next-generation networks, and IoT networks will run on IPv6. With the natural obsolescence of IPv4-only equipment (and little new being released), the world will gradually move away from IPv4.
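
For a concrete feel for the numbers mentioned above, the short Python sketch below uses the standard ipaddress module to compare the IPv4 and IPv6 address spaces and to show that even a single customer prefix dwarfs all of IPv4. The prefixes are documentation-style example values, not real allocations.

import ipaddress

# Total address space of each protocol.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.3e}")
print(f"IPv6/IPv4 ratio: {ipv6_total // ipv4_total:.3e}")

# Documentation prefixes (RFC 5737 / RFC 3849), used here purely as examples.
v4_net = ipaddress.ip_network("198.51.100.0/24")
v6_net = ipaddress.ip_network("2001:db8::/56")   # a typical end-site assignment size

print(f"{v4_net} holds {v4_net.num_addresses} addresses")
print(f"{v6_net} holds {v6_net.num_addresses:.3e} addresses "
      f"({v6_net.num_addresses // ipv4_total:,} times the whole IPv4 Internet)")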

4 September 2021
Cases of accidents in DCs in 2021

Due to the Covid-19 pandemic, interruptions in the operation of broadband Internet and data centers are felt especially acutely by ordinary users, government agencies and businesses. High-speed Internet has been a lockdown lifesaver for millions of office workers, students, healthcare professionals and others who increasingly rely on broadband for remote work and learning.

In Auckland, a data center outage was caused by rodents that damaged a cable. At the end of January 2021, over 1,000 broadband users in the western part of the New Zealand city of Auckland were disconnected from the network for more than a day. An investigation found that the outage was caused by rodents damaging a 144-fiber optic cable belonging to the telecommunications company Chorus. The incident took place in the Massey area. According to Chorus, the cable break "was probably due to a rodent, which is consistent with similar damage seen elsewhere."

Pakistani data centers went offline due to a submarine cable fault. A similar incident happened in early February 2021 in Pakistan, where much of the local Internet infrastructure was disrupted by a damaged submarine cable near the Abu Talata settlement in Egypt. The affected trunk cable is used by Pakistani ISP Trans World Associates (TWA), which serves about 40 percent of the country. The exact cause of the incident was not disclosed, but the country has faced problems of a similar nature before, with backbone fiber-optic cables both inadvertently damaged and deliberately cut. Notably, in January 2021 Pakistani data center operators also faced a nationwide power outage, as a result of which most of the Internet services served by local data centers went offline.

AWS data center downtime took numerous clients offline, including Roku and Adobe. On November 25, 2020, the Amazon Kinesis real-time streaming service stopped working at the AWS data center campus in Northern Virginia. Kinesis is used by other AWS tools, which also stopped working, and this in turn shut down the services of a number of clients, including Flickr, iRobot and Roku. The system was fully operational by the evening of the same day, and the company apologized for the problems, noting how important the service is to its customers and their customers.

Massive shutdown of server farms in Texas due to a storm. In the second half of February 2021, North American Internet users saw many popular services go offline due to data center outages caused by a power failure in Texas. The state put a rolling-blackout plan in place because of winter storm Uri; the poorly funded and poorly regulated regional power grid lost tens of gigawatts of capacity, leaving millions of citizens without heating. There are about 200 data centers operating in Texas. Most major local data center operators switched to diesel generators when faced with reduced power from the grid, continuing to serve corporate customers and ordinary users.

31 August 2021

Popular

How to become LIR in 7 days
IP-resources

Internet infrastructure includes switches and routers, which require a fairly large number of IP addresses, as well as server addresses. Some applications need dedicated addresses, for example sites using the encrypted SSL protocol, which require a dedicated IP per site, or protected IoT devices, which each need their own IP. As a result, many participants in the hosting industry, as well as companies running their own internal sites, are now experiencing a shortage of IP resources. Given that all RIRs have already entered the exhaustion phase, obtaining this resource has become more difficult: IPv4 subnets are now allocated only to new LIRs. After all, version 4 of the protocol is universal and suitable for almost all users.

Most companies tend to ask for class B-sized address blocks, since these best match their needs thanks to the optimal ratio between the number of networks and the number of hosts in them. A class A network, with around 16 million hosts, offers more than a corporate network typically requires, while a class C network cannot meet the needs of a large company because of the small number of hosts it can address.

Becoming a LIR is not difficult and is relatively cheap. You will need copies of the company's documents, information about your infrastructure and two upstream providers. You will also need to specify the intended country of use, the purposes of use and contact details. The whole application takes no more than two hours to prepare; however, RIR staff will take considerably longer to process it, typically three to four days, and payment and the remaining formalities take another two to three days. As a result, you receive membership and an allocation of a block of IPv4 addresses (a minimum of 1,024). On Pangnote, you can seek advice from brokers and from other companies that can explain how to get through all the required procedures quickly.
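
As a quick illustration of the block sizes discussed above, the hedged Python snippet below uses the standard ipaddress module to show how many host addresses the old class A/B/C sizes and a /22 allocation (1,024 addresses, the minimum mentioned here) actually contain. The prefixes are documentation and example values, not real assignments.

import ipaddress

# Classful sizes expressed as CIDR prefixes, plus a /22 (the 1,024-address
# minimum allocation mentioned in the article). Prefixes are examples only.
examples = {
    "class A (/8)":         "10.0.0.0/8",
    "class B (/16)":        "172.16.0.0/16",
    "class C (/24)":        "192.0.2.0/24",
    "LIR allocation (/22)": "203.0.113.0/22",   # illustrative prefix
}

for label, prefix in examples.items():
    net = ipaddress.ip_network(prefix, strict=False)
    # Subtract network and broadcast addresses to get usable hosts.
    usable = net.num_addresses - 2
    print(f"{label:22s} {str(net):18s} {net.num_addresses:>10,} addresses "
          f"({usable:,} usable hosts)")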

30 August 2018
Hosting

How to avoid mistakes when choosing a hosting provider

Everyone says that we learn from mistakes, but sometimes these mistakes lead to very large losses. Here, then, are a few common mistakes people make when choosing a hosting provider.

Choosing the cheapest tariff plan. Of course, you cannot blame anyone for wanting to save money, but you have to be realistic about what you can and should save on. The amount you might save by choosing one provider over another is minimal, so in the long run paying for quality hosting pays off. It is better to stay away from anything that looks too cheap and too good to be true. On the other hand, you do not necessarily have to pay a lot for quality hosting: deliberately choosing an expensive provider does not always justify itself.

Ignoring the terms of use. Let's be honest: many of us tend to ignore the terms and conditions when ordering a service or product online. Most people are familiar with ticking the "I have read and agree with the terms of use" box almost subconsciously, but sometimes this habit can be fatal. Always read the terms of any contract carefully; doing so protects you from unpleasant surprises.

Overly big plans. It is important to estimate realistically how much traffic and how many resources, such as memory and disk space, you will need. Unless you have a truly unique and fascinating idea for a new product or service, you are likely to have a slow start before you build an audience and attract enough money to justify a large hosting plan. It is quite easy to upgrade and promote your site after it is created, so starting with a cheaper hosting plan is safe; you can always change it later. Do not worry too much about what you will need in the future; focus instead on how to make your new site profitable in its present form.

No technical support. If you skip technical support, keeping your site online will most likely become difficult. Technical support should exist, and it is important to be able to reach it not only by email or chat but also by phone, preferably 24x7.

17 September 2018
4 virtualization trends in 2019
Hosting

According to MarketsandMarkets, the virtualization market will grow over the next five years. As the main market drivers, experts point to demand for specialized software and for infrastructure for working with large amounts of data.

Serverless technologies will evolve. Serverless technologies allow developers to abstract away infrastructure management and focus on developing and deploying applications. This is possible thanks to automation: the provider takes control of the virtual infrastructure needed to run and scale applications, and you pay only for the time and resources used here and now. Of course, we cannot yet speak of mass adoption of the technology in 2018, but the analyst data mentioned above show that this market will develop steadily over the next four to five years.

The popularity of NoSQL will grow. Working with Big Data involves more than analyzing huge amounts of information; it also includes processing data that is regularly updated and comes from various sources. NoSQL databases are increasingly used to solve these problems. NoSQL is a relatively new technology, and for some time it experienced difficulties with implementation and ran into technical problems. For example, in 2015, 40 thousand MongoDB NoSQL databases were left unprotected from attack; hackers only needed to scan TCP ports to find the vulnerable databases.

Object storage will grow in popularity. Object storage is becoming increasingly popular on the market. Files in such storage are accompanied by metadata that allow them to be processed as application objects: documents, projects, photos and so on, while also offering high storage reliability.

Multi-cloud will combine workloads. According to an IDC forecast, 85% of companies will switch to a multi-cloud architecture in 2018. In addition to meeting the individual needs of organizations, the growing interest in multi-cloud is driven by the growing importance of data protection and automation, according to Juniper. Security has therefore become one of the main strategic directions of Juniper's development in 2017-2018, and the company has been working on automation for many years.

2 February 2019

Robotic integration in data centers in 2021: cases and forecasts
Hosting

Since data center operators and owners are mostly cautious and conservative people, genuine acceptance of new technologies always comes hard to them. The idea of letting a mechanized system run a data center could make some server farm operators and owners think twice before taking the first step in the right direction, and circumstances like these slow progress significantly.

Back in 2013, IBM engineers launched a pilot project to use a robot in a data center. They retooled an iRobot device (similar to the Roomba autonomous vacuum cleaner), allowing it to move around the data center monitoring temperature and other parameters. The project was quietly put on the back burner. But despite such setbacks, robots have quietly found jobs in and around the data center.

DE-CIX, the German internet exchange operator, has built a family of automatic patching robots, including machines called Patchy McPatchbot, Sir Patchalot and Margaret Patcher. The devices are based on XY manipulators and are capable of connecting fiber-optic cables. These robots have already been used in a number of projects, for example when migrating client equipment, moving connections without powering off IT equipment.

Robots are also already being used in some hyperscale data centers for highly specialized tasks. In 2018, Google engineers shared information about their practice of using robots to destroy hard drives: in the Internet giant's data centers, essentially stationary industrial manipulators help collect disks and place HDDs in a shredder. Alibaba engineers have created a more advanced system. The Chinese-developed second-generation Tianxun robot is powered by artificial intelligence and can operate without human intervention, automatically replacing any faulty hard drive. The entire replacement process, including automatic inspection, detection of the failed drive, ejecting it and inserting a new one, is quick and smooth, taking about four minutes.

Facebook is also experimenting with robots. In 2020, it emerged that the company has a team of robotics specialists which, since 2019, has been designing "robotic solutions for automating and scaling the procedures for operating the infrastructure of Facebook data centers." Among the known projects are robots that can move around the data center observing the state of the environment (similar to the IBM project).

Designing a data center without people. What happens if data centers no longer have to be designed to accommodate people inside, or even to provide a passageway for a human operator? By removing the "human factor" from the equation, you can leave only small doors on the facade of the building, say for very small robots with manipulators that perform the tasks of a human operator. Shrinking the doors can improve physical security. The shape of data centers can also change: they could become large cylindrical structures placed next to office buildings, since a cylindrical shape simplifies ventilation. It may even be possible to create a sealed data center without oxygen, which acts as a catalyst for corrosion of critical electronic components. Such data centers could also operate at higher temperatures.

30 July 2021
Hosting

Methods of raising the efficiency of data centers

A major concern for data center operators is to ensure that these facilities operate as efficiently as possible in terms of cost, energy and IT utilization, and for good reason. So where should data center operators start? Here are tips from a number of data center experts.

1. Understand who you are and what you have. The efficiency problem often boils down to human error; if management had a choice, it would keep people out of the data center altogether. Nevertheless, the main task is to make the DC's operation visible. A detailed analysis of data centers often brings surprises: in VMware environments average utilization comes out at 47%, with memory at 34%, processors at 29%, storage at 62% and switch-port capacity at 78%, because people use the switches they bought themselves. In one case, 72% of stored file data had not been accessed for six months.

2. Invest in monitoring and data management tools. Making the most of a system means measuring and monitoring its health, whether you are talking about equipment, preventive maintenance, safety, communications or environmental management. Identify strengths and weaknesses and reallocate resources to reduce TCO or improve resilience. You need to start with a thorough audit; however, many clients invest in new technologies before they know exactly where they stand today. Modern sensors allow accurate measurement of a large number of parameters: look at air flows, underfloor cavities, thermal insulation, temperature setpoints and electrical routes. Many large UPSs, for example, run at only around 20% load.

3. Optimize cooling. Solving airflow management problems typically produces a rapid increase in data center efficiency, though in colocation sites it is not always easy to apply. If we can make sure that every cubic centimeter of cold air produced actually passes through the servers and other equipment in the machine room instead of bypassing or short-circuiting around them, then we can be sure we are cooling only what needs to be cooled, which greatly improves DC efficiency. It is also necessary to ensure that customers use the space they rent with maximum efficiency, loading it to 70, 80 or 90%. Even a hot/cold air containment system is often not the optimal solution, and further improvements are needed for reliability and energy efficiency.

4. Take a wider view. Attention to consolidation, harmonization and unification of operations in general can also improve efficiency. Some of this can be accomplished with appropriate data center infrastructure management platforms; where operators have a single dashboard or low resource fragmentation, they should take advantage of these factors where possible. In general, maximizing system utilization can extend the life of the data center. Further advances in automation, AI and predictive analytics, including digital twins, will help more operators lower labor costs and increase utilization across all data center systems, though probably not in the next six to twelve months.

17 July 2021
Hosting

Why and how to buy drop domains?

Old domains, or drop domains, are domain names that users registered, used, and then did not renew. Anyone can register such recently vacated domains. Drop domains are useful for:

• Building a Private Blog Network (PBN). With expired domains you can create a network of web resources that link to the site being promoted; Google or another search engine will quickly begin to trust a new site that many established web resources with history link to.
• Creating a satellite site, an additional thematic website that collects traffic and redirects it to the promoted resource.
• Setting up 301 redirects. If the expired domain previously hosted a site on a similar subject which other thematic resources linked to, you can set up a 301 redirect from the drop domain to your current one and transfer all of the link weight to the main site.

Using drop domains is useful for older sites that do not get good traffic or do not make it to the top of the SERPs, for new sites that need a quick start, and for sites that fail to reach the top 10 search results despite good content and investment in promotion.

How to select expired domains. Website owners are attracted to expired domains that reflect their brand name, or to domains with traffic, especially if they are linked from reputable sources. Look for drop domains with traffic by following these recommendations.

Check Google. Enter the domain into Google using a query in the format site:domainname to see how it is indexed. The more results, the better. If there are no results, the domain has not been used for a site for a long time or has been filtered; such a domain may not be suitable, as its positions in the search results have been lost.

Find out the reputation of the previous site. Analyze the reputation of the previous site on the vacated domain using online tools such as Safe Web and MxToolbox, which will show whether the domain is on any blacklists.

Analyze the web archive. Use the web archive to check the content that was previously hosted on the domain. If you want to use a drop domain for your site, it should be thematically relevant: if you have a site about cars, the old sites on the drop domain should have had similar content. To check this, enter the domain into the search bar on the web archive site; the result will show saved copies of the pages on different dates, which you can view.
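
To automate the "analyze the web archive" step, the hedged Python sketch below queries the Internet Archive's public Wayback "available" endpoint for a candidate drop domain. The domain used is just a placeholder, and the response fields are handled defensively since the API's output may vary.

import json
import urllib.parse
import urllib.request

def latest_snapshot(domain: str):
    """Return the URL of the closest archived snapshot of a domain, if any."""
    query = urllib.parse.urlencode({"url": domain})
    api = f"https://archive.org/wayback/available?{query}"
    with urllib.request.urlopen(api, timeout=10) as resp:
        data = json.load(resp)
    # The endpoint returns {"archived_snapshots": {"closest": {...}}} when a
    # capture exists and an empty "archived_snapshots" object otherwise.
    closest = data.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest.get("url")
    return None

if __name__ == "__main__":
    candidate = "example.com"           # placeholder drop-domain candidate
    snapshot = latest_snapshot(candidate)
    if snapshot:
        print(f"{candidate}: archived copy found at {snapshot}")
    else:
        print(f"{candidate}: no archived copies, check its history manually")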

10 July 2021
The Chinese are developing an underwater data center
Hosting

Following the example of colleagues at Microsoft, who developed and successfully tested the Project Natick underwater server farm, engineers at the Chinese company Beijing Highlander have created their own underwater data center. The Chinese published video and photographic material from the presentation of the innovative data center in early January 2021; the presentation took place in the port city of Zhuhai in Guangdong province.

Technical features of the Chinese underwater data center. There are apparently four server racks inside the prototype, placed in a sealed tank. Power supply and the network connection between the data center and the shore are planned to be implemented using composite cables, and the servers inside are cooled with seawater from outside the tank. Chinese engineers point out that the biggest obstacle to data center development is electricity consumption, a significant part of which goes to cooling the servers; using seawater to solve this problem can reduce the energy consumption of subsea data centers by about 30 percent.

Adopting the best practices of competitors. Subsea data centers not only offer increased energy efficiency thanks to cool seawater, they also allow critical computing power to be deployed close to customers in metropolitan areas without using valuable land. Microsoft engineers put this concept into practice earlier with the Project Natick initiative: as part of the pilot project, specialists from the software giant tested an underwater data center with 12 racks in the coastal waters of Scotland for a two-year period, with the servers housed inside a sealed tank with a nitrogen atmosphere. The Chinese intend to further improve the sustainability of their own submarine data center by supplying it with renewable energy, including wind, solar and tidal power. As a result, the subsea data center promises low operating costs, low delays in transmitting and receiving data, high reliability and safety.

7 July 2021
Hosting

Indian provider to lay trunk cables to Singapore and Italy

The Indian telecommunications giant Reliance Jio Infocomm Limited, which provides communications services under the Jio brand, intends to lay two high-capacity submarine Internet cables to help satisfy the country's growing appetite for traffic. One of the cables is named IEX, or India-Europe-Express. Its starting point will be Mumbai; the line will pass through the Middle East and North Africa, reaching Italy, with connection points in Oman, Saudi Arabia, Egypt and Greece. The second cable is called IAX, India-Asia-Express: starting in Mumbai and ending in Singapore, it will pass through the Asia-Pacific region, with the Maldives, Bangladesh and Myanmar among the possible connection points. The capacity of the new lines will be up to 200 Tbit/s. The IAX link is planned to be launched in mid-2023 and the IEX link in 2024. The lines will support popular services such as 5G communication systems, distance learning and teleworking platforms, and the Internet of Things. As we can see, India and the Middle East are becoming growth points for Internet users.

24 June 2021
Facebook builds its own optical backbone across US to minimize ping
Hosting

Facebook has announced the completion of the first phase of a new fiber backbone route linking its data centers in the Midwest and on the East Coast of the United States. The line runs along I-70 from eastern Indianapolis to the Indiana-Ohio border. The second phase of the project, which has already begun, involves laying cable from western Indianapolis along I-70 to the Illinois border. In addition to improving connectivity between its data centers, the project will improve Internet access for residents and businesses in the state and surrounding areas, according to Facebook.

Facebook is partnering with commercial network operators to connect their networks to its fiber backbone, and intends to work with local and regional service providers in Indiana. This will give them enough network bandwidth to expand broadband access, especially in underserved rural areas near the new fiber routes.

New fiber paths between data center clusters are important not only to increase throughput but also to build headroom that ensures data continuity in the event of a failure. Facebook and other hyperscalers are investing in their own routes, both terrestrial and submarine (connecting continents), to meet their growing needs for bandwidth between data centers. Investment in fiber backbone infrastructure by large companies also helps local and regional ISPs improve their networks once they gain access to this new infrastructure. Facebook's data centers in the Midwest are located in Iowa and Nebraska, and its East Coast data center cluster consists of facilities in Ohio, Virginia and North Carolina.

5 May 2021

Network connectivity issues become the leading cause of data center downtime
Hardware

According to the Uptime Institute, network problems are overtaking power problems at the top of the data center outage rankings as enterprises strive to move more of their workloads to the cloud. The third annual data center outage analysis attempts to shed light on the frequency and causes of server farm downtime over the past 12 months. The report notes that the failure rate appears to have dropped markedly, citing the coronavirus pandemic as one factor: a direct consequence of the quarantine and stay-at-home restrictions imposed by governments is that many companies temporarily shut down or scaled back their operations, possibly resulting in fewer data center outages. In addition, in line with the guidelines for data center operators that the Uptime Institute released at the start of the pandemic in March 2020, many firms decided to postpone data center maintenance and modernization projects, which tend to be a source of disruption.

Looking at global enterprise-class IT infrastructure more broadly (including private data centers, colocation and public clouds), the Uptime Institute's annual survey paints a consistent picture over the years: power problems are invariably the single biggest cause of outages. However, the Uptime Institute now expects more outages to be caused by network and software/IT problems, and fewer by power problems. This is partly because the frequency of power-related failures is steadily decreasing as operators improve their facilities' infrastructure and train personnel to take preventive measures against such incidents. At the same time, network outages are becoming more common due to the widespread shift in recent years from disparate IT services running on specialized hardware to a model in which IT systems are distributed and replicated across multiple sites linked by network connections. Corporate data centers are typically served by one or two telecom providers, but companies are increasingly looking to move away from such facilities in favor of colocation or public clouds for their workloads, so the risk of network problems harming their operations is increasing.

13 July 2021
Hardware

Google begins development of its own Systems on Chip to reduce power consumption of large-scale computing platforms

Google is starting to develop its own systems on chip (SoCs), which are expected to help build large-scale computing platforms with lower power consumption. Amin Vahdat, Vice President of Systems Infrastructure at Google, described the initiative in an open letter. Building a server-grade SoC will be Google's next step in improving the hardware of its data centers. Vahdat notes that until recently the hardware base was improved by integrating components onto motherboards; in that case, however, the elements are still separated from each other by inches of wiring. With an SoC this problem disappears: numerous functions can be integrated into a single chip, or several chips can coexist in one package.

To implement the new project, Google hired Uri Frank, a well-known Intel engineer. From 2016 to 2020 he oversaw the development of Core processors, and he also served as Vice President of the Intel Platform Engineering Group. It is not yet clear what architecture the future Google SoCs will use or exactly what they will be used for. Given that the goal is energy-efficient solutions, we can assume that Arm cores will be used. So far, self-developed Arm SoCs have been deployed at scale only at Amazon: two generations of Graviton are offered as part of instances and serve as the basis for other AWS services, and the AWS infrastructure itself relies heavily on Nitro. According to rumors, Microsoft is also busy building its own Arm SoCs for servers and mobile devices. Other hyperscaler announcements include Alibaba's XuanTie 910, but this RISC-V chip is still focused on IoT and edge computing; Alibaba also has the Hanguang 800 AI accelerator. In that segment Google has long had its TPU, now in its fourth generation, as well as the Coral edge solution. Finally, the company has the OpenTitan security SoC.

11 May 2021
How it works and why to use S3 Object Storage
Hardware

S3 storage is an object storage service offered by cloud providers. The main advantage of the solution is the ability to store files of any type and any size with a high level of reliability and availability. Working with S3 storage comes down to creating containers (buckets) and adding the necessary files to them as objects; everything that goes into a container can then be viewed, moved or deleted, and the containers themselves can also be removed when no longer needed.

An important point: in addition to the objects themselves, object storage keeps metadata that describes each object's properties, along with a globally unique identifier in the form of an assigned address. These attributes are stored in a flat address space, which eliminates the problems encountered when working with a hierarchical file system built on complex file paths. Notably, a single object may carry heterogeneous metadata that characterizes it in great detail; an audio file, for example, can carry metadata describing the artist, song, album and other information. The metadata is then indexed, which greatly simplifies and significantly speeds up searching for objects by the specified criteria.
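
As a hedged illustration of the bucket/object/metadata workflow described above, the minimal Python sketch below uses the widely available boto3 SDK with placeholder bucket and key names; it assumes credentials and region are already configured in the environment and that the bucket exists.

import boto3

# Assumes AWS (or S3-compatible) credentials are configured via env vars or ~/.aws.
s3 = boto3.client("s3")

BUCKET = "example-music-archive"        # placeholder bucket name
KEY = "albums/track01.mp3"              # placeholder object key

# Upload an object together with user-defined metadata describing it.
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b"example audio bytes",        # placeholder content
    Metadata={"artist": "Example Artist", "album": "Example Album"},
)

# Read the metadata back without downloading the object itself.
head = s3.head_object(Bucket=BUCKET, Key=KEY)
print(head["Metadata"])                 # {'artist': 'Example Artist', 'album': 'Example Album'}

# List objects in the bucket, then delete the one we just uploaded.
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"], obj["Size"])
s3.delete_object(Bucket=BUCKET, Key=KEY)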

27 December 2019
How to combine HCI with your existing IT infrastructure
Hardware

The first and easiest way to integrate existing equipment, applications and services with new HCI equipment is to use its built-in APIs; some HCI models do come with such tools, which allows developers to build direct connections to the HCI platform. For example, the HPE SimpliVity HCI platform provides a REST API that can be used to configure HCI equipment from the command line, and the same API can be used to service applications in the HCI environment. Similar APIs are available in the Nutanix Prism and Cisco HyperFlex management systems. Another way is to use third-party tools, which some vendors integrate into their HCI products in advance: Dell EMC, for example, has deeply integrated its VxRail HCI solution with VMware management tools, and DataOn Storage has done the same, delegating management and administration functions to Windows Admin Center.

Choose the right applications for HCI. Despite its many advantages, the HCI platform is not an ideal choice for every corporate application: its hardware configuration is predefined, and that alone creates certain limitations. If a corporate application mainly involves running conventional virtual machines, however, this characteristic of the HCI platform is not of fundamental importance.

Be creative with HCI equipment. Advertising often mentions that HCI provides built-in backup support. Taken creatively, this also means that HCI equipment can work well for disaster recovery: by building data stores on the HCI platform, you can kill two birds with one stone, gaining more reliable data storage thanks to built-in backup support as well as very fast recovery of running virtual machines.

HCI equipment should be integrated with the rest of the enterprise IT ecosystem. HCI systems can work autonomously; they can be "put in the corner" and left alone. But they can become a significant part of the overall corporate IT ecosystem if you think about integration. The list of common tasks includes monitoring and allocation of system resources, workload orchestration and so on, so it is worth auditing the software tools the company uses and figuring out which of them have points of contact with the HCI models in place.

Use HCI to reduce the complexity of deploying complex application systems. The whole headache of introducing new applications falls on companies' IT departments; using an HCI platform allows them to reduce the complexity of operating the infrastructure and servicing workloads.
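
Purely as a hedged illustration of driving an HCI platform through a REST API from a script (the host name, endpoint paths and token handling below are invented placeholders, not SimpliVity's, Prism's or HyperFlex's actual API), a generic Python sketch might look like this:

import requests

# Hypothetical HCI management endpoint and credentials (placeholders only).
BASE_URL = "https://hci-mgmt.example.local/api/v1"
TOKEN = "REPLACE_WITH_API_TOKEN"

session = requests.Session()
session.headers.update({
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/json",
})
session.verify = False  # many lab appliances use self-signed certificates

# List virtual machines known to the (hypothetical) HCI cluster.
vms = session.get(f"{BASE_URL}/virtual_machines", timeout=30)
vms.raise_for_status()
for vm in vms.json().get("virtual_machines", []):
    print(vm.get("name"), vm.get("state"))

# Request a backup of one VM via the same (hypothetical) API.
resp = session.post(
    f"{BASE_URL}/virtual_machines/vm-001/backup",
    json={"retention_hours": 24},
    timeout=30,
)
resp.raise_for_status()
print("Backup task id:", resp.json().get("task_id"))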

18 December 2019
Hardware

DARPA Creates New Networking Approaches to Increase Distributed Application Performance 100-fold

Dr. Jonathan Smith, a program manager at the DARPA Information Innovation Office, believes that the true bottleneck for processor throughput is the network interface used to connect a machine to an external network such as Ethernet, which severely limits how much data a processor can take in. Today the throughput of a network built with modern technology is about 10^14 bits/s, and data is processed in aggregate at roughly the same rate, yet current network stacks deliver application-level throughput of only 10^10 to 10^11 bits/s. To speed up distributed applications and narrow this performance gap, DARPA has launched the Fast Network Interface Cards (FastNICs) program.

Creating a network stack is costly and complex, from maximizing the connections between hardware and software to handling application interfaces. Strong commercial incentives favor cautious, gradual advancement of new technologies in many independent market segments, which has discouraged anyone from reworking the stack as a whole. To help substantiate the need for such a significant revision, FastNICs will select a demonstration application and provide it with the hardware support, operating system and application interfaces needed to accelerate the whole system through faster network adapters.

Part of the FastNICs program will focus on developing hardware to dramatically raise the raw aggregate data rate of the server connection. In this area, researchers will design, implement and demonstrate 10 Tbps network interface hardware using existing or planned hardware interfaces. The hardware must connect to servers through one or more standard interface points, such as I/O buses, multiprocessor interconnect networks and memory slots, to support fast migration to FastNICs technology. A second area of research will focus on the system software needed to manage FastNICs hardware resources: to achieve a 100-fold increase in application-level throughput, the system software must ensure efficient, parallel data transfer between the network hardware and the rest of the system.
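
To make the quoted figures concrete, the small Python calculation below, using only the numbers stated above, shows the gap between hardware capacity and today's application-level throughput and what a 100-fold improvement would mean:

# Figures quoted in the article, in bits per second.
network_hw_throughput = 1e14          # modern network hardware, aggregate
app_throughput_low = 1e10             # current application-level throughput (low end)
app_throughput_high = 1e11            # current application-level throughput (high end)

gap_best = network_hw_throughput / app_throughput_high   # best case today
gap_worst = network_hw_throughput / app_throughput_low   # worst case today
print(f"Application stacks use only 1/{gap_best:.0f} to 1/{gap_worst:.0f} "
      "of the raw hardware throughput.")

# FastNICs' target: a 100x improvement at the application level.
target = 100 * app_throughput_high
print(f"A 100-fold speedup would lift application throughput to about "
      f"{target:.0e} bit/s, i.e. 10 Tbit/s, matching the program's 10 Tbps "
      "hardware goal.")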

17 December 2019
Creating GPU-based systems for the clouds
Hardware

GPU developers are also releasing hardware that increases the performance of vGPU clusters. Last year, for example, Nvidia introduced a new graphics processor for data centers, the Tesla T4, based on the Turing architecture. Compared with the previous-generation GPU, T4 performance for 32-bit floating-point operations rose from 5.5 to 8.1 teraflops (p. 60 of the document). The new architecture raised the accelerator's performance by separating integer and floating-point work, which now run on separate cores. The developer also combined shared memory and the L1 cache into a single module, which increased cache throughput and capacity. T4 cards are already used by large cloud providers.

In the near future, experts expect demand for cloud graphics accelerators to grow. This will be helped by the development of hybrid technologies that combine GPUs and CPUs in one device: in such integrated solutions the two types of cores share a common cache, which speeds up data transfer between the graphics and conventional processors. Special load balancers are already being developed for such chips, raising vGPU performance in the cloud by 95% (slide 16 of the presentation).

Some analysts, however, believe that virtual GPUs will be displaced by a new technology: optical chips, in which data is encoded by photons. Machine learning algorithms are already running on such devices; for example, the optical chip from the startup LightOn completed a transfer-learning task several times faster than a regular GPU, in 3.5 minutes instead of 20. Large cloud providers will most likely be the first to introduce the new technology. Optical chips are expected to accelerate the training of recurrent neural networks with LSTM architecture and of feed-forward neural networks. The creators of optical chips hope that such devices will go on sale within a year, and a number of major providers are already planning to test the new kind of processor in the cloud. The hope is that such solutions will increase the popularity of high-performance cloud computing and the number of companies that process big data in the cloud.
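
As a quick sanity check on the performance figures quoted above (taking only the numbers from the article), the short Python calculation below works out the relative gains:

# Nvidia Tesla T4 vs previous generation, FP32 teraflops (figures from the article).
prev_tflops = 5.5
t4_tflops = 8.1
fp32_gain = (t4_tflops / prev_tflops - 1) * 100
print(f"FP32 throughput gain: {fp32_gain:.0f}%")             # roughly 47%

# LightOn optical chip vs a regular GPU on a transfer-learning task.
gpu_minutes = 20
optical_minutes = 3.5
speedup = gpu_minutes / optical_minutes
print(f"Optical chip speedup: about {speedup:.1f}x faster")  # roughly 5.7x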

14 December 2019

IP-resources

The cost of IPv4 addresses almost doubled in 3 quarters

As the price of an IPv4 address continues to rise, attackers may attempt to steal unused or unsecured IPv4 blocks, so companies that own or manage IPv4 address blocks are advised to watch for an increased risk of hijacking attempts in the coming months. The price of an IPv4 address has reached an all-time high: back in 2020, prices peaked in the summer at around $23-24, but in the first quarter of 2021 they hit $32 in February, nearly 30% above the 2020 peak. In fact, the lowest prices this year, at $23-26, are roughly the same as last year's highest. While some marketplaces give IPv4 block owners a legitimate way to sell or lease unused addresses, there is also an underground market in which cybercriminal groups carry out the same operations, taking control of IPv4 addresses from companies that have failed to secure them properly. Although IPv4 theft is not as common as other forms of cybercrime, such incidents have occurred consistently for many years. Criminals register expired domains of companies that once owned a block of IPv4 addresses, or create new companies with similar names, and then contact organizations such as ARIN, APNIC, RIPE NCC, AFRINIC, or LACNIC to hijack legitimate IPv4 space. Taking control of IPv4 addresses has also become easier because some owners do not always verify the legitimacy of IPv4 block transfers.

26 July 2021
RIPE NCC will now allocate only returned blocks
IP-resources

The global exhaustion of IPv4 addresses has been expected for quite some time, and that day has now come. The RIPE NCC, the regional Internet registry serving Europe, the Middle East and parts of Central Asia, has reported the exhaustion of its last block of IPv4 addresses. All 4.3 billion network addresses have thus been distributed, which means new ranges can no longer be allocated to providers, companies and large suppliers of network infrastructure. Of course, there are still addresses that can be reissued, but RIPE has stated that there are not enough of them to sustain the current pace of the global network's growth: operators' needs run into millions of addresses. Neither the introduction of CG-NAT nor resale will have a long-term effect; these are all temporary measures. Issuing addresses from the pool of returned blocks is equally palliative: they will be given only to LIRs that have not previously received IPv4 addresses, and only in blocks of no more than 256 addresses (a /24). The real alternative is switching to IPv6, although this requires upgrading network infrastructure. At the moment the share of IPv6 is approaching 30%, with Europe and the USA in the lead, while the countries of the post-Soviet space still have little IPv6 coverage, even though some steps are being taken in that direction.
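For reference, the /24 figure above is easy to check with Python's standard ipaddress module; a minimal sketch (the prefix is an RFC 5737 documentation range used purely as an example):

import ipaddress

# A /24 block, the maximum size now issued from the returned pool.
block = ipaddress.ip_network("198.51.100.0/24")
print(block.num_addresses)                              # 256 addresses

# The entire IPv4 space, for comparison with the ~4.3 billion figure above.
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)  # 4,294,967,296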

31 December 2019
IP-resources

What do IPv4 brokers do and how is a subnet transferred?

The rules of every RIR are developed by the Internet community to give companies that need IPv4 addresses the opportunity to keep working with the IPv4 protocol despite the limited number of available addresses. Under these rules, organizations that hold free pools of IPv4 addresses may transfer them to other organizations. Over the past few years some organizations have accumulated large numbers of IPv4 addresses and may now hold more than they need. Returning unneeded addresses is free of charge, since the companies themselves received those addresses for free. Alternatively, organizations can transfer unneeded addresses to other companies that have documented their need for IPv4 space. Any transfer of address pools must follow the rules set out in the policies of each RIR; this helps make the best use of the remaining addresses. For now, it is still possible to lease or buy IPv4 addresses and re-register them to another company; the basis for a sale is an IP transfer agreement between LIRs. It is exactly these needs that IPv4 brokers serve, and the most in-demand broker service is assistance with pool transfers. The typical stages of an IPv4 pool transfer are listed below (a small verification sketch follows the list):
• Review of your request for IPv4 addresses (size, class, minimum and maximum price)
• Search for a seller's subnet and mediation of price negotiations
• Establishing and overseeing the transfer of RIR documents and preparing for the transfer (removal of route objects, verification of administrative correspondence)
• Setting up a secure payment flow, either through a trusted third party or under a direct contract
• Support for the financial transaction, then initiating and managing the transfer with RIPE and handing over the subnet
• Closing the transaction once the RIR database has been updated with your data
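As a final sanity check after a transfer, the new holder can confirm that the RIR database now lists their organisation for the block. Below is a minimal sketch (our own illustration, assuming the standard whois command-line client is installed; 198.51.100.0/24 is a documentation prefix standing in for the transferred subnet):

import subprocess

# Query the RIPE database for the inetnum object covering the transferred block.
prefix = "198.51.100.0/24"
result = subprocess.run(
    ["whois", "-h", "whois.ripe.net", prefix],
    capture_output=True, text=True, check=False
)

# Print the lines that identify the current holder; after a completed transfer
# the inetnum / netname / org / country fields should match the buyer's LIR data.
for line in result.stdout.splitlines():
    if line.startswith(("inetnum:", "netname:", "org:", "country:")):
        print(line)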

2 March 2019
What problems can you face if you need to buy IPv4 space?
IP-resources

With IPv4 exhausted, the only long-term way to keep the Internet growing is to use IPv6 addresses. However, that transition currently involves organizational and financial difficulties that deserve a separate article; here we talk about IPv4. In the short term, the following options are possible:
• Reallocation of existing IPv4 addresses. This applies to the largest ranges allocated in the early years of the Internet, most of whose address space is unused. Redistributing them is not always possible, since organizational and legal issues are involved, but some organizations (Stanford University, the US Department of Defense and a few others) have already done so. In theory, regional registries could also require LIRs to report address space usage so that unused blocks can be returned to the common pool.
• Use of NAT by providers. This is now common in some countries, but it brings a number of inconveniences for the end user and is not always a workable option.
• Resale of IPv4 addresses.
Let us dwell on the last option in more detail. Since IP addresses were always provided free of charge (although an LIR pays membership fees), nobody used to think about whether they could be bought or sold. Harsh reality, however, has forced the community to consider that possibility as well. The regional registries' policies contain a clause on redistributing IP addresses between LIRs: the organization to which the addresses are transferred must justify its need for them, just as it would when requesting additional addresses. A black market for IPv4 addresses is already a reality; ICANN representatives themselves admit that it exists, and how to control it remains an open question. Meanwhile, platforms for buying, selling and leasing IPv4 addresses have appeared on the Internet. According to brokers, the final price of an IPv4 address ranges from 15 to 20 dollars.

2 March 2019
IP-resources

Reasons for hosters to buy IPv4 address space now

IPv4 is the first major version of the protocol that keeps a huge part of the Internet running. Of course, 4.3 billion IPv4 addresses is a large number, but it is still not enough for all the devices and needs around the world. IPv6 was developed to solve this problem: it uses a 128-bit address space, giving a total of 2^128 addresses, which should be enough for many decades to come. In the long run the only way to keep the Internet growing is to use IPv6 addresses, but right now that path involves organizational and financial difficulties that deserve a separate article. Although almost all registries have handed out their available blocks, individual addresses can still be bought and sold. One option is to find ranges issued in the early days, split them into smaller blocks and redistribute them, though locating such ranges is not easy, since no thorough registry of them was kept. MIT is one example: in 2017 the university found it had 14 million unused IP addresses and decided to sell 8 million of them. This approach has a downside, however: uncontrolled bulk resale of IP addresses can fragment the address space and swell routing tables, which can cause problems for routers with limited memory. Another solution is NAT. NAT rewrites the IP addresses of transit packets: it replaces the local address with a public one in packets heading out to a web server and maps the replies back to the right device. A NAT gateway has roughly 65,000 (2^16) ports available, so many local addresses can be folded into a single public one; only the router is visible from outside, while the devices themselves stay hidden. NAT has its drawbacks, though. Protocols that predate it (FTP, for example) may work unreliably behind NAT. And if all of a company's employees visit the same site at once, the server may mistake the traffic for a DDoS attack, since every request arrives from one public address, and block access for every device behind that IP. Given that widespread adoption of IPv6 is still some way off, buying IPv4 address space is currently the most sensible decision for a hoster.
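To make the arithmetic above concrete, here is a small sketch (our own illustration) using Python's standard ipaddress module: it prints the IPv6 address count, splits a block into /24 subnets the way a legacy range might be broken up for redistribution, and shows the NAT port budget. The 10.0.0.0/22 prefix is a private placeholder, not a real allocation:

import ipaddress

# IPv6 address count mentioned in the article: 2 to the 128th power.
print(2 ** 128)

# Splitting a larger block into /24 subnets before redistribution.
block = ipaddress.ip_network("10.0.0.0/22")
for subnet in block.subnets(new_prefix=24):
    print(subnet, "-", subnet.num_addresses, "addresses")

# NAT port arithmetic: one public IPv4 address offers about 2^16 ports
# per transport protocol for translated sessions.
print(2 ** 16)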

1 March 2019
Experts doubt the need to transition to IPv6 before 2023
IP-resources

The exhaustion of free IPv4 addresses has divided the networking community into two camps: conservatives who stick with IPv4, and innovators actively promoting IPv6. A senior Brocade representative said that many of the company's customers have run into problems migrating to the new protocol: IPv6 is not backward compatible with IPv4, so the transition requires replacing old equipment. Internet providers participating in the launch event promised that at least 1% of their users would be connected over IPv6 by the launch date, and Cisco and D-Link say their goal is to support IPv6 routing across all their products. The transition from IPv4 to IPv6 may cause problems for some organizations, but large ISPs and Internet companies, including Facebook and Google, have already invested heavily in IPv6 deployment. What is it all about? The IPv4 addressing scheme is limited to about 4.3 billion available addresses, most of which are already assigned. With the fleet of Internet-connected devices growing exponentially, this can become a serious problem and leave many subscribers unable to get online. IPv6 should solve these problems, but the migration also has its opponents. Conservatives claim that the current addressing system can be used indefinitely, that NAT will help, and that there are no obvious prerequisites for moving to IPv6. Innovators believe it is not economically sensible to keep using old technology, and that even NAT cannot solve the problem completely; in their view, the Internet should fully switch to IPv6 within the next 18 months. Although some manufacturers propose running both transport protocols at the same time, many questions remain unresolved, chief among them the cost of such a decision. Mr. Stewart said that Brocade has prepared a special NAT gateway for translating IPv4 to IPv6, which will let equipment incompatible with IPv6 keep functioning normally even after a full-scale transition. According to Brocade, however, more and more companies prefer IPv6.
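Since the article mentions running both protocols side by side, here is a minimal sketch of a dual-stack readiness check (our own illustration, not anything Brocade ships), using Python's standard socket module to see whether a hostname resolves over IPv4, IPv6, or both; example.com is just a neutral test host:

import socket

# Check which address families a hostname resolves to.
host = "example.com"

families = set()
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(host, 80):
    if family == socket.AF_INET:
        families.add("IPv4: " + sockaddr[0])
    elif family == socket.AF_INET6:
        families.add("IPv6: " + sockaddr[0])

for entry in sorted(families):
    print(entry)
# A host listed under both families is reachable from dual-stack clients;
# an IPv4-only listing is exactly the situation the transition debate is about.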

29 December 2018