Hosting

Losses and the number of cyber attacks on companies are constantly growing

Businesses face hacker attacks regularly: every year, month and day. Of course, you will not hear official statements from the companies themselves; hardly anyone wants to admit their own vulnerability and undermine customer trust. A recent Radware study found that at least one in five companies faces information security threats every day. For comparison, in 2017 there were 62% fewer such companies.

The target is reputation

Another worrying sign is the drop in the number of companies reporting hacker attacks only once or twice a month. Set against the earlier statistics, this suggests that companies are now attacked more often. Modern hackers can hardly be accused of lacking imagination: their approaches have become more diverse, and the attack vectors change periodically. DDoS attacks, for example, come in two formats, volumetric and non-volumetric. Their goal, however, remains the same: to overload the server running the target application with requests so that it becomes unavailable to other users. In 2018 the number of network-level DDoS attacks grew by an average of 12%, the volume of unwanted ICMP traffic rose by 44%, and roughly one in 13 organizations reported attacks on VoIP services.

The cost of attacks and threats in 2019

One of the questions that interests many is what a single cyber attack actually costs. Beyond reputational damage and the loss of customer confidence, one incident costs a company $1.1 million on average, 52% more than the figure Radware obtained in its 2017 survey. At the moment companies are most concerned about potential information security threats such as application vulnerabilities (34%) and permanent denial of service (19%). Interestingly, companies in different regions voiced different concerns about the consequences of incidents: in North America data breaches are feared most, while in the EMEA region (Europe, the Middle East and Africa) respondents worry mainly about the damage an attack can do to a company's reputation. Statistics on cyber incidents in the information era are unlikely ever to please us with low numbers. Still, the fact that more than 60% of companies, aware of the potential risks, already have a contingency plan gives hope that attackers will not achieve their ignoble goals.
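The mechanics of a volumetric attack are easy to picture from the defender's side: a server (or an upstream filter) has to decide, per client, whether the request rate is still plausible. Below is a minimal, hypothetical Python sketch of a sliding-window rate limiter that illustrates the idea; real DDoS mitigation happens in dedicated appliances or scrubbing services, not in application code, and the limits here are invented.

```python
import time
from collections import defaultdict, deque

# Hypothetical illustration: cap each client at MAX_REQUESTS per WINDOW seconds.
# Real volumetric DDoS mitigation is done upstream (scrubbing centers, anycast,
# hardware filters); this only shows the per-source accounting idea.
MAX_REQUESTS = 100
WINDOW = 1.0  # seconds

_history = defaultdict(deque)  # client address -> timestamps of recent requests

def allow_request(client_ip, now=None):
    """Return True if the request fits under the per-client rate cap."""
    now = time.monotonic() if now is None else now
    timestamps = _history[client_ip]
    # Drop timestamps that have left the sliding window.
    while timestamps and now - timestamps[0] > WINDOW:
        timestamps.popleft()
    if len(timestamps) >= MAX_REQUESTS:
        return False  # looks like flooding; reject or challenge the client
    timestamps.append(now)
    return True

if __name__ == "__main__":
    # A client sending 150 requests in a burst gets cut off after 100.
    allowed = sum(allow_request("203.0.113.7") for _ in range(150))
    print(f"allowed {allowed} of 150 burst requests")
```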

16 February 2019
19
Hosting

What should DCs expect from the datasphere by 2025?

Experts identify three main locations where digital information is generated and processed today: the core, the edge and the endpoints. The data produced in this way constitutes the global datasphere, which, according to preliminary estimates, should reach 175 zettabytes by 2025.

Core, edge and endpoints

IDC calls data centers the core of the datasphere, and the growth of data in the core is driven primarily by the rising popularity of clouds of all types (private, hybrid and public). The core also includes corporate data centers, for example those used to manage transmission lines and telephone networks. The edge category covers secure corporate servers and devices located outside data centers: server rooms, local servers, cell towers and small regional data centers that speed up data transfer. Endpoints are end-user devices such as personal computers, smartphones, industrial sensors, connected cars and so on.

Constant interaction with data

According to IDC, more than five billion users already interact with data in some way. By 2025 their number is expected to grow to six billion, or 75% of the world's population. In that year, each person using connected devices will interact with data about 4,900 times a day, roughly once every 18 seconds. Such striking numbers follow from the fact that, thanks to the development of the Internet of Things, there should be about 150 billion connected devices in the world by 2025. Most of them will generate real-time data: in 2017 such data made up 15% of the entire datasphere, and by 2025 its share will grow to 30%.

The data-driven industries

Today, data-driven approaches are used most often in financial services, manufacturing, healthcare, media and entertainment. Manufacturers, for instance, can extend a product's life cycle and reduce failures by analyzing patterns of user behavior captured by sensors embedded in connected devices.

Conclusion

One zettabyte equals a trillion gigabytes. According to the forecasts, by 2025 the entire global datasphere will amount to 175 zettabytes.
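The per-person figures are easy to sanity-check with a couple of lines of arithmetic; the short Python sketch below uses only the numbers quoted above and reproduces the "every 18 seconds" claim while putting 175 zettabytes into gigabytes.

```python
# Sanity-check the figures quoted above (all inputs are from the article).
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400
interactions_per_day = 4_900

seconds_between_interactions = SECONDS_PER_DAY / interactions_per_day
print(f"one interaction every {seconds_between_interactions:.1f} s")  # ~17.6 s

# One zettabyte is a trillion gigabytes (10**21 / 10**9 = 10**12).
datasphere_zb = 175
datasphere_gb = datasphere_zb * 10**12
print(f"{datasphere_zb} ZB = {datasphere_gb:,} GB")
```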

16 February 2019
6
Hosting
What is more popular on the market now: containers or virtualization?

A container is an isolated application runtime environment: it packages the application itself together with its executables, configuration files and libraries. The technology greatly simplifies migrating applications from one environment to another. Unlike virtual machines, each of which runs its own OS, containers share the operating system kernel with one another. Many data centers now offer users both containers and virtualization.

Confidence in containers is gradually growing. Two fifths of the companies surveyed in the study said they intend to switch to containers in the near future, although a wholesale rejection of virtual machines is hardly likely.

Kubernetes comes first

Interestingly, 59% of the respondents reported that they deploy containers inside virtual machines; they also use public clouds (39%), private clouds (38%) and dedicated servers for this purpose. The most popular container orchestration solution as of 2017 was Kubernetes: according to SurveyMonkey and Portworx, about a third of the respondents use it. Kubernetes lets you virtualize the underlying IT infrastructure in which containers are deployed without a hypervisor and provides tools for organizing and updating thousands of containers. The Amazon ECS container management service came only third in the popularity rating, although 70% of the participants in the Diamanti survey believe that AWS stands to gain the most among vendors from the shift to containers.

Conclusion

Although containers are being discussed more often and more willingly, companies are still at the very beginning of adopting them. Virtual machines have been in use for 15 years and are so ingrained in everyday technology that they are unlikely to disappear anytime soon. Still, quite a few IT professionals already see containers as a good opportunity to reduce their dependence on virtual machines and, at the same time, give corporate applications more flexibility.
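To make the "packaged runtime" idea concrete, here is a minimal, hypothetical sketch using the Docker SDK for Python (the `docker` package); it assumes a local Docker daemon is running and simply starts a throwaway container that shares the host kernel.

```python
import docker  # pip install docker; assumes a local Docker daemon is running

# Hypothetical illustration: run a short-lived container. The container brings
# its own filesystem, libraries and config, but shares the host's kernel,
# which is what distinguishes it from a virtual machine.
client = docker.from_env()

output = client.containers.run(
    image="alpine:3.19",      # small base image, used here purely as an example
    command=["uname", "-r"],  # prints the *host* kernel version from inside the container
    remove=True,              # clean up the container when it exits
)
print("kernel seen inside the container:", output.decode().strip())
```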

16 February 2019
3
Hosting
How to avoid mistakes when choosing hosting

Everyone says we learn from our mistakes, but some mistakes can lead to very large losses. So we would like to draw your attention to a few common mistakes made when choosing hosting.

Choosing the cheapest tariff plan

You cannot blame anyone for wanting to save money, but you have to be realistic about what you can and should save on. The amount you can save by choosing one provider over another is usually small, so in the long run spending a little more on quality hosting pays off. It is better to stay away from anything that looks too cheap and too good to be true. On the other hand, you do not necessarily have to pay a lot for good hosting either: deliberately choosing an expensive plan does not always justify itself.

Ignoring the terms of use

Let's be honest: many of us tend to ignore the terms and conditions when ordering a service or product online. The checkmark next to "I have read and agree with the terms of use" is often ticked almost subconsciously. Sometimes this habit can prove fatal. Always read the terms of any contract carefully; it will protect you from unpleasant surprises.

Big plans

It is important to estimate realistically how much traffic and how many resources, such as memory and disk space, you will actually need. Unless you have a truly unique and genuinely compelling idea for a new product or service, you are likely to have a slow start before you build an audience and attract enough money to justify a large hosting plan. Upgrading a site after it is created is quite easy, so starting with a cheaper plan is a safe choice: you can always change it later. Do not worry too much about what you will need in the future; instead, focus on how to make your new site profitable in its present form.

No technical support

Skimping on support is a rule that will sooner or later make keeping your site online difficult. Technical support should exist, and it is important to be able to reach it not only by email or chat but also by phone, preferably 24x7.

17 September 2018
476
Hosting

What is the reason for the global increase in the number of hyperscale data centers?

In April 2017 there were 320 hyperscale data centers in the world, and by December their number had reached 390. Synergy expects the count to grow to 400 in the first quarter of 2018 and to reach 500 in 2019.

Location and pricing

According to the Synergy study, most hyperscale data centers are located in the United States (44%); China accounts for 8%, and Japan and the United Kingdom for 6% each. Australia and Germany each host 5% of such data centers, while Singapore, Canada, Brazil and India account for 3% each. According to Deloitte and Gartner, global spending on traditional infrastructure amounted to $1,045 billion in 2016. This figure is shrinking: forecasts put it at $1,005 billion in 2018. At the same time, demand for cloud service providers keeps rising, with spending on cloud solutions growing from $361 billion in 2016 to $547 billion in 2018. In March last year, for example, Google announced that it had spent $30 billion on expanding its data center network over the previous three years, while Microsoft and Amazon each invest about $10 billion in data center infrastructure annually.

Opposite trend: micro DC

Hyperscale data centers are mainly used for developing and training artificial intelligence, working with IoT and other areas that require processing large amounts of data. Not every company benefits from them, however, since working with hyperscale facilities requires large investments in infrastructure. That is why micro data centers have gained popularity in recent years. Micro, or modular, data centers are a set of building blocks for assembling a DIY data center. Design and deployment depend on the client's wishes: the customer chooses the required number of racks and can have a fully operational data center within a month (we have already written about modular data centers here). Such a data center is attractive for three reasons: the equipment takes up little space, no dedicated staff is needed to deploy capacity, and it consumes less electricity than a traditional data center. The technology is already used in IoT, retail, video streaming and banking. According to Research and Markets, the micro data center market will grow from $2 billion in 2017 to $8 billion by 2022, with an average annual growth rate of 26%.

15 November 2018
94
Hosting
In what way does cloud hardware matter?

We are assured that hardware in the cloud no longer plays a special role, but is that really so?

Commodity hardware copes well with the task

Modern cloud service providers build an environment for computing as a commercial service on top of large pools of commodity-class hardware. This approach is called hyperscaling. It lets them use virtualized storage, network and compute resources that are deployed on demand according to rules applied at the moment of allocation. But how does this affect reliability? Providers do not expect the hardware never to fail, or to fail only rarely; instead they anticipate equipment breakdowns and handle them so that they have minimal or no effect on the services delivered to customers. In other words, with this approach the provider does not spend all its money chasing the last fractions of a percent of fault tolerance. It buys inexpensive equipment and builds high-density systems from it, saving at infrastructure scale thanks to a smaller footprint and lower cooling and power requirements. Those savings are then reflected in the prices offered to the buyer. It turns out that a large cloud provider can deliver the same, if not higher, reliability as those who deploy more expensive infrastructure. At the same time it has more freedom to configure its systems: it can add the functionality required in a multi-tenant hyperscale cloud environment and strip out everything unnecessary.

Does hardware class matter?

The question remains: when buying cloud services, does it make any difference what equipment is used behind them? Given that the cloud provider sells services, solutions and applications, the formal answer is no. But looking at the cloud providers with the richest set of corporate services, it becomes obvious that hyperscale hardware underpins that menu of services. Corporate customers do not care about the equipment as such, yet it is precisely the hardware that is responsible for running applications reliably at a price those customers are willing to pay. And that already makes the choice of hardware very important.
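The "cheap hardware, high reliability" argument rests on simple probability: spread the service over enough independent inexpensive machines and the chance that all of them fail at once drops very fast. A small, hypothetical Python sketch (the failure rates are made-up illustrative numbers, not vendor figures) shows the effect.

```python
# Illustrative only: invented availability figures, not vendor data.
# The point is how replication across cheap machines compounds availability.

def unavailability(per_node_failure_prob, replicas):
    """Probability that every replica is down at the same time,
    assuming independent failures."""
    return per_node_failure_prob ** replicas

cheap_node = 0.05      # a commodity server assumed down 5% of the time
premium_node = 0.01    # a pricier server assumed down 1% of the time

print(f"1 premium node unavailable: {unavailability(premium_node, 1):.4%}")
print(f"2 cheap nodes unavailable:  {unavailability(cheap_node, 2):.4%}")
print(f"3 cheap nodes unavailable:  {unavailability(cheap_node, 3):.6%}")
# Three 95%-available commodity nodes already beat one 99%-available premium node.
```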

2 February 2019
50
Hosting

The global cloud services market contributes to the emergence of new business models

Numerous business cases demonstrate that cloud platforms act as a catalyst for business growth. Low initial investment, a rental model and a quick, easy start make companies evaluate how applicable cloud solutions are to their specific situation. According to Gartner, the global cloud services market will reach a turnover of $411 billion by 2020. The fastest growing segment will be IaaS: having crossed $40 billion in 2018, it could reach $83.5 billion by 2021. At the same time, the cloud services market continues to concentrate around three main players that will hold 70% of the IaaS market: Google, Amazon and Microsoft.

Whose clouds do corporate clients use?

While some experts argue that the cloud services market has no national specifics, others take a different view. In the West, for example, Amazon and Microsoft cover dozens of cloud services for almost every need of corporate customers, whereas in China the local market is almost completely monopolized by Alibaba Cloud. Worldwide, 81% of large enterprises choose a multi-cloud strategy. According to a 2018 RightScale survey of representatives of about 1,000 companies from different countries, 66% of cloud users plan to increase their cloud spending by at least 20%, and 17% by 50-100%. For most cloud users the priorities are cost optimization and moving workloads into the cloud. As for priority cloud initiatives, 58% of users name reducing costs by optimizing their existing cloud services as the most important. This is followed by moving additional workloads to the cloud, improving the quality of financial reporting, optimizing governance policies, implementing a cloud-first strategy (making the cloud the default target for computing), wider use of containers, adopting CI/CD (continuous integration and deployment), expanding the use of public clouds and using a broker to manage multi-cloud environments.

9 February 2019
2
Hosting
Silicon photonics technology is coming to DCs

Silicon photonics, the end-to-end transmission of data over optical channels inside the processor, between computer components, between computers and between data centers, promises two things at once. First, data throughput will increase. Second, this not only requires no additional power but actually cuts the consumption of interface circuits several times over. Integrating optical circuits into processors and controllers is helped by the fact that silicon is transparent to infrared radiation, so a signal in this range propagates well through optical waveguides inside the chips. Leading companies, including Intel and IBM, have come close to integrating optical circuits, such as semiconductor lasers and optical multiplexers, into processors. Intel was even on the verge of commercializing its proprietary MXC modules and the corresponding data center infrastructure. In 2015, however, the project was frozen and the company focused on the Omni-Path interconnect, which can be integrated into Xeon Phi processors and accelerators. Yet the future of the data center still belongs to silicon photonics. GlobalFoundries, as a recent press release from the contract semiconductor manufacturer states, also believes in the prospects of integrated optical interfaces. On its lines, GlobalFoundries is preparing to produce solutions in which optical communication lines are built into custom LSIs and multi-chip assemblies. Integration and the transition to on-chip optical interfaces, with the option of extending optical data transfer between systems, will increase bandwidth by up to 10 times and reduce power consumption during data transfer by a factor of five. GlobalFoundries' silicon photonics is based on research by American scientists that the startup Ayar Labs has brought to commercial use; we wrote about Ayar Labs and its processors with optical interfaces exactly two years ago. GlobalFoundries will adapt Ayar Labs' hybrid interface manufacturing technology to its 45 nm CMOS process, and Ayar Labs, under a licensing arrangement, will provide interested GlobalFoundries customers with design IP blocks for creating products with optical interfaces. The initiative promises to lead to interfaces with a bandwidth of up to 10 Tbit/s.

9 February 2019
2
Hosting

How to improve DC efficiency during operation

Improving the efficiency of engineering equipment and the performance of operating solutions remains a typical challenge for data center owners around the world.

Motivation to improve energy efficiency

Data center operators turn to energy efficiency when the cooling capacity of the machine room is no longer sufficient, or when the business, driven by competitiveness, demands an efficiency improvement of a certain percentage. Greater efficiency also helps meet the growing need to keep track of where IT equipment is located and to record who worked with it and when. Another stimulus is the need to utilize data center resources effectively and to forecast that utilization at least six months to a year ahead. According to Schneider Electric experts, data center owners today try to maintain a temperature of 23-25 C at the IT equipment; these values are monitored not in the room as a whole but directly at the rack inlets. Correcting deficiencies in the selection and organization of the engineering infrastructure that are identified during operation is another way to increase efficiency. Although every data center has its own history and development path, these deficiencies can be grouped into several characteristic categories:
• issues with the air distribution system between the air conditioning units and the racks with IT equipment;
• issues with the flow of cold and hot air around the racks with IT equipment;
• issues with how end IT equipment is connected inside the cabinets.
The causes of these shortcomings can be traced to the history of the data center's operations teams, changes in the departments responsible, ineffective interaction between the different parts of the operations division, and the rapidly evolving tasks assigned to the data center.

9 February 2019
16
Hardware
Dual-Mode Enterprise SSD Controller from Silicon Motion

The Taiwanese company Silicon Motion has announced the development and launch of a new NAND memory controller for enterprise-class SSDs, the SM2270. The new product confirms the company's policy of reducing its dependence on the consumer SSD market: the market for servers and supercomputers is larger than the market for home devices, so every developer tries to move into the corporate segment whenever possible. The Silicon Motion SM2270 controller has one feature that is so far unique: it can operate in one of two modes, either as a normal controller on the PCI Express bus with support for the NVMe protocol, or in Open Channel mode. For this, the controller ships with standard or customized firmware at the customer's request. In Open Channel mode, which is only beginning to penetrate the drive market, control over the SSD's memory array is handed to the operating system; a preliminary version of the Open Channel specification is supported, for example, by Microsoft's Denali environment. According to the developers, in some scenarios the operating system will manage data on the SSD better than a controller built into the drive, and in any case it will make data management in storage systems more flexible. The SM2270 is built around a powerful "tripled" dual-core ARM Cortex-R5 architecture. The host interface provides up to 8 PCI Express 3.0 lanes with support for the NVMe 1.3 protocol, and 16 channels lead to the NAND array, allowing a maximum SSD capacity of 16 TB. All NAND memory types are supported, including the latest 96-layer 3D NAND with TLC and QLC cells. The controller's maximum throughput when reading random 4 KB blocks reaches 800,000 IOPS. In addition, the SM2270 supports a number of Silicon Motion's proprietary technologies: Power Loss Protection, the 6th generation of NANDXtend technology with error correction based on machine learning algorithms, and end-to-end data protection for the SRAM and DRAM buffers. Finally, the new controller can operate across an extended temperature range, ensuring data integrity in demanding working environments.
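The headline figures are easy to relate to each other: 800,000 random 4 KB reads per second translate into roughly 3.3 GB/s, comfortably inside what eight PCIe 3.0 lanes (about 1 GB/s each, roughly 7.9 GB/s in total) can carry. A short sketch of that arithmetic, using only the numbers quoted above:

```python
# Relate the quoted IOPS figure to the PCIe 3.0 x8 host interface.
IOPS = 800_000
BLOCK_BYTES = 4 * 1024                 # random 4 KB reads

random_read_bw = IOPS * BLOCK_BYTES    # bytes per second
print(f"random-read bandwidth: {random_read_bw / 1e9:.2f} GB/s")   # ~3.28 GB/s

PCIE3_LANE_GBPS = 0.985                # ~985 MB/s usable per PCIe 3.0 lane (128b/130b)
lanes = 8
print(f"PCIe 3.0 x{lanes} budget:     {PCIE3_LANE_GBPS * lanes:.2f} GB/s")  # ~7.88 GB/s
```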

2 February 2019
12
Hardware

What HPE is focused on and what innovations to expect

The company's main focus is on technologies such as artificial intelligence, edge computing, autonomous data centers and intelligent storage.

Autonomous data center

The idea is a data center that runs automatically and requires no maintenance personnel, which is clearly impossible without built-in AI. The company promotes two main products here: the Nimble storage system and InfoSight, a cloud-based tool for monitoring and managing its maintenance that automates support almost completely. InfoSight collects information from sensors, teaches itself, looks for patterns and then makes recommendations based on the accumulated experience; it also performs routine actions automatically, such as updating firmware. In the future HPE intends to extend InfoSight to all of its products and thereby bring AI to the autonomous data center.

Artificial intelligence

In AI, data, the means of processing and storing it, and the speed of processing all play a huge role. One step in this direction is accelerating applications with new high-speed non-volatile SCM storage devices connected via the NVMe protocol to 3PAR and Nimble storage controllers. HPE, in partnership with Intel, is implementing SCM technology in its 3PAR and Nimble storage platforms. According to Vladislav Logvinenko, the best option today is to use NVMe and SCM for caching.

Edge computing

The need to perform processing outside the data center arose with the spread of the Internet of Things (IoT). There are two separate worlds, operational technology and information technology: the former includes the various sensors and other devices that generate huge amounts of data. This information needs to be analyzed quickly, so it makes sense to do it not in data centers but as close as possible to where the data is generated. For this, HPE offers the Edgeline Converged System family of devices, which move computing resources out of data centers and place them next to IoT data sources.
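The "NVMe and SCM for caching" remark boils down to a classic tiered lookup: serve hot data from the small fast layer, fall back to the slower bulk layer on a miss, and promote what was just read. A deliberately simplified, hypothetical Python sketch of that pattern (not HPE's implementation) is below.

```python
from collections import OrderedDict

class TieredCache:
    """Toy two-tier read cache: a small fast tier (think SCM/NVMe)
    in front of a large slow tier (think a disk-based array).
    Purely illustrative; not how 3PAR/Nimble actually work internally."""

    def __init__(self, fast_capacity, slow_store):
        self.fast = OrderedDict()          # LRU order: oldest first
        self.fast_capacity = fast_capacity
        self.slow_store = slow_store       # dict-like backing store

    def read(self, key):
        if key in self.fast:               # cache hit: cheap
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow_store[key]       # cache miss: expensive read
        self.fast[key] = value             # promote into the fast tier
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)  # evict the least recently used block
        return value

backing = {f"block{i}": f"data{i}" for i in range(1000)}
cache = TieredCache(fast_capacity=3, slow_store=backing)
for k in ["block1", "block2", "block1", "block3", "block4"]:
    cache.read(k)
print(list(cache.fast))  # the three most recently used blocks remain cached
```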

2 February 2019
9
Hardware
New AMD processor will not require software optimization

In early January, AMD announced the upcoming 7 nm Ryzen 3000 processors based on the fundamentally new Matisse design, but many details of their structure and features remained behind the scenes. AMD's Chief Technology Officer has now made a statement that will please AMD fans: existing software will have no problems with the upcoming processors, and no additional rework or optimization will be required. This matters because, in their time, first-generation Ryzen processors suffered noticeably from non-optimized software, showing performance well below expectations, particularly in latency-sensitive tasks such as many games. AMD even had to make a special effort to get developers to release patches with optimizations for the unique Zen architecture, which in some cases really did improve Ryzen performance. Now the company assures that operating systems and software will not need to be re-optimized for the future Ryzen. The optimizations required when the first Ryzen was released were related to the Core Complex concept, but AMD worked successfully with Windows and Linux, which gained correct recognition of the Core Complex, and workloads now behave correctly on processors with such an organization. Each Core Complex combines four compute cores and 8 MB of L3 cache, and the Zeppelin die on which current Ryzen processors are based contains two such complexes plus a set of external controllers. In the future Ryzen on the Zen 2 architecture this scheme will obviously change somewhat, since the memory controllers and peripheral interfaces are being moved to a separate I/O chiplet. The core chiplets, however, are likely to retain their Core Complex building blocks, which makes it easy to scale the number of cores.
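The Core Complex layout lends itself to a quick back-of-the-envelope count; the short sketch below just multiplies the figures given above (4 cores and 8 MB of L3 per complex, two complexes per Zeppelin die).

```python
# Figures from the article: a Core Complex (CCX) = 4 cores + 8 MB L3,
# and one Zeppelin die carries two such complexes.
CORES_PER_CCX = 4
L3_PER_CCX_MB = 8
CCX_PER_ZEPPELIN = 2

dies = 1  # a single-die Ryzen part
print(f"cores: {dies * CCX_PER_ZEPPELIN * CORES_PER_CCX}")        # 8
print(f"L3 cache: {dies * CCX_PER_ZEPPELIN * L3_PER_CCX_MB} MB")  # 16 MB
```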

2 February 2019
13
Hardware
IT giant leaves the DC chip market

Qualcomm has announced that it is closing the division that produced processors for data centers. Qualcomm released the Centriq ARM processor in November 2017. The chip had great potential in the data center market: it was several times cheaper than Intel's top models while offering similar performance and lower power consumption.

Reasons for the failure

One reason Qualcomm decided to stop work on its chip is the continued dominance of x86 processors. Moving to the ARM architecture requires reworking the software that runs on data center servers, and not all companies are ready to take that risk. As a result Qualcomm could not find enough customers for the device; the lack of orders is evident from the fact that the company never mentioned Centriq in its financial reports. The second reason is tied to Qualcomm's fight for its own independence. At the end of 2017 and the beginning of 2018, Singapore-based Broadcom tried several times to take over the company. The deal fell through because of a US government ban, but Qualcomm's investors were still worried about the organization's future and questioned its viability. Company representatives say that Qualcomm will now focus on devices for 5G networks and edge computing, though it does not intend to abandon data center chips entirely: the Centriq technology will be developed in China under the Thang Long brand in collaboration with Baidu, Alibaba and Tencent.

Future of the market

According to experts, processors from other manufacturers, in particular Cavium, could replace Centriq. Cavium released its own ARM chip, ThunderX2, in 2018, and the device is already in demand in supercomputers; Qualcomm's withdrawal may help ThunderX2 gain popularity in other areas as well. IBM, together with Samsung, has developed a 7 nm processor that will be used in the IBM Z cloud service and in LinuxONE servers; according to IBM, the new processor is designed for machine learning workloads and will be released in 2020. Cloud computing is one of the main drivers of the processor market, and ARM chips have every chance of capturing a significant share of it. Recently the architecture was rebranded for the infrastructure segment under the name Neoverse.

23 January 2019
20
Hardware

New SSD family from Micron with up to 7.68 TB capacity

Micron, one of the largest players in the NAND flash and solid-state drive market, has introduced a new SSD family for corporate customers. The devices, laconically named 5200 PRO and 5200 ECO, succeed the 5100 models but, unlike them, will be available only in the 2.5-inch / 7 mm form factor. Dropping the M.2 variant is explained, on the one hand, by the use of the slow SATA 6 Gb/s interface and, on the other, by low demand for such products. The new models are based on the time-tested Marvell 88SS1074 controller and Micron's 64-layer 3D TLC NAND flash chips. The new drives are interesting above all for their capacity and endurance; their performance figures can hardly be called outstanding, and the write IOPS, which range from 9,500 to 33,000, are not particularly impressive: Toshiba, for example, has presented SSDs with much higher numbers. Sequential read and write speeds reach 540 and 520 MB/s for almost all 5200 drives, the only exception being the 480 GB version of the Micron 5200 ECO. Each of the seven SSDs consumes 1.5 W at idle, up to 3 W when reading and up to 3.6 W when writing. TLC memory, especially multi-layer TLC, is now actively used by SSD makers in server and workstation products, and its endurance raises no serious doubts; moreover, Micron gives a five-year warranty on both the 5200 PRO and the 5200 ECO. There are only two models with the PRO suffix, 960 GB and 1.92 TB, with endurance ratings of 2.27 and 5.95 PB respectively. Incidentally, cell durability in the 5200 PRO is not that different from the 5200 ECO: the endurance of the 7.68 TB drive is 8.4 PB. Both Micron 5200 series support 256-bit AES encryption. The new drives are recommended as a universal solution for virtualization, cloud storage, transaction processing systems, AI assistants and media servers.
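Endurance ratings in petabytes are easier to compare as drive writes per day (DWPD) over the five-year warranty; the sketch below does that conversion for the figures quoted above (capacities and PB ratings are from the article, the rest is plain arithmetic).

```python
# Convert the quoted endurance ratings (PB written) into drive writes per day
# over the five-year warranty period. Capacities and ratings are from the article.
WARRANTY_DAYS = 5 * 365

drives = {
    "5200 PRO 960 GB":  (0.96, 2.27),   # (capacity in TB, endurance in PB)
    "5200 PRO 1.92 TB": (1.92, 5.95),
    "5200 ECO 7.68 TB": (7.68, 8.40),
}

for name, (capacity_tb, endurance_pb) in drives.items():
    full_writes = (endurance_pb * 1000) / capacity_tb      # PB -> TB
    dwpd = full_writes / WARRANTY_DAYS
    print(f"{name}: ~{dwpd:.2f} drive writes per day")
```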

19 January 2019
12
Hardware
Cloud equipment production volumes show a strong growth trend

According to IDC, the global cloud infrastructure market grew by 47.2% in the third quarter of 2018. Analysts estimate that worldwide sales of server, network and storage equipment used to deploy cloud services reached $16.8 billion, up from $11.4 billion a year earlier. Hardware sales in the private cloud category rose 28.3% to $4.7 billion, while sales in the public cloud category grew 56.1% to $12.1 billion. For the first time, cloud equipment accounted for more than half (50.9%) of total IT infrastructure spending, compared with 43.6% in Q3 2017. At the same time, the market for equipment for on-premises IT infrastructure still grew by 14.8%. Since that market is going through a technology refresh cycle expected to end this year, its growth will slow to 12.3%. By 2022, hardware purchases for on-premises data centers will make up 42.4% of total IT infrastructure spending, down from 52.6% in 2018. Dell EMC remains the largest manufacturer of equipment for cloud services: the combined company's revenue in this market reached $2.4 billion, 50.7% more than a year ago. Hewlett Packard Enterprise followed, ending the quarter with $1.6 billion in sales, up 14.2% year over year, and Cisco rounded out the top three with revenue of $1.1 billion and sales growth of 16.4%. ODM vendors selling equipment directly to data centers earned a total of $6.1 billion on cloud infrastructure, 34% more than in the same period of 2017. Geographically, the fastest growth in cloud equipment sales in Q3 was recorded in the Asia-Pacific region (excluding Japan) and China, at 62.6% and 88.7% respectively; Japan grew by 48.2%, the USA by 44.2% and Canada by 43.4%. According to IDC, Amazon accounted for the lion's share of the increase in investment in cloud IT equipment. The largest hyperscale data center operators, Google, Alibaba, Microsoft, Facebook, Apple, Baidu and Tencent, continue to actively expand their computing capacity and move their own infrastructure to the cloud.

19 January 2019
5
IP-resources

Experts doubt the need to transition to IPv6 before 2023

The exhaustion of free IPv4 addresses has split the network community into two camps: conservatives who stick with IPv4, and innovators actively promoting IPv6. A senior Brocade representative said that many of the company's customers have run into problems migrating to the new protocol. IPv6 is not backward compatible with IPv4, so the transition requires replacing old equipment with new. Internet providers participating in the event promised that at least 1% of their users would be connected over IPv6 by the launch date, while Cisco and D-Link say their goal is to bring IPv6 routing support to all of their products. The transition from IPv4 to IPv6 may cause problems for some organizations, but large ISPs and Internet companies, including Facebook and Google, have already invested a great deal of effort in spreading IPv6.

What is it all about?

The IPv4 addressing scheme is limited to about 4.3 billion available addresses, most of which have already been assigned. For the exponentially growing fleet of devices connected to the Internet, this can become a serious problem and leave many subscribers unable to access the web. IPv6 is meant to solve it, yet the migration has its opponents. The conservatives claim that the current addressing system can be used indefinitely, that NAT will help, and that there are no obvious prerequisites for moving to IPv6. The innovators believe that continuing to use old technology is economically unjustified and that even NAT cannot solve the problem completely; in their view, the Internet should switch fully to IPv6 within the next 18 months. And although some manufacturers propose running both transport protocols at the same time, many questions remain unresolved, the main one being the price of such a decision. Mr. Stewart said that Brocade has prepared a special NAT gateway for translating between IPv4 and IPv6, which will help equipment incompatible with IPv6 keep functioning normally even after a full-scale transition. According to Brocade, however, more and more companies are choosing IPv6.
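The "4.3 billion" figure is simply the size of a 32-bit address space, and the contrast with IPv6's 128 bits is easy to show with Python's standard ipaddress module; a small sketch:

```python
import ipaddress

# IPv4: 32-bit addresses -> about 4.3 billion in total.
ipv4_total = 2 ** 32
print(f"IPv4 addresses: {ipv4_total:,}")            # 4,294,967,296

# IPv6: 128-bit addresses -> astronomically more.
ipv6_total = 2 ** 128
print(f"IPv6 addresses: {ipv6_total:.3e}")          # ~3.403e+38

# The standard library exposes the same numbers per network.
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)   # 4294967296
print(ipaddress.ip_network("::/0").num_addresses == 2 ** 128)  # True
```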

29 December 2018
11
IP-resources
The share of users connecting to Google servers over IPv6 has reached 25%

According to Google's statistics, the share of IPv6 connections to the company's servers exceeded 25% as of October 2018. Given Google's scale and its presence in every major market except China, these statistics can be considered reasonably representative, and they suggest that the world is gradually moving to IPv6 alongside IPv4, whose addresses are in seriously short supply. The countries most active in using IPv6 are Belgium (52.68%), Germany (39.14%) and Greece (36.53%), with the United States, India, Uruguay and Malaysia slightly behind. The shortage of IPv4 addresses has become most acute with the growth of the market for IoT devices that connect to the Internet directly, bypassing local routers and other access points. These include smart home technologies, remote control devices and even gate relay control modules, which often use the mobile Internet instead of Wi-Fi. Devices with direct Internet access are also ubiquitous in areas such as security and fire alarm systems, using a mobile operator's SIM card as the communication module.

Why is it so slow?

IPv6 is already more than 20 years old, but it began to spread noticeably only in the last five years, when providers ran into a shortage of real IP addresses. The main reason for rejecting IPv6 is the simple inertia of network engineers and their motto: if it works, better not to fix it. For the sake of optimization, many fields that do not directly affect routing were moved into IPv6 extension headers. This significantly sped up IPv6 networks but brought its own problems: the IPv6 architecture has created new vectors for network and LAN attacks, built on the same principles used against IPv4 networks. At the same time, IPv6 makes traffic harder to control and monitor, which naturally does not suit providers that offer restricted network access. In conclusion, over the past ten years we have witnessed massive preparation for the introduction of IPv6 and the transition to its widespread use, and the vast majority of modern devices and operating systems already support it.
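Whether your own host and resolver offer IPv6 for a given service can be checked from the standard library alone; below is a small, hypothetical sketch using socket.getaddrinfo (the hostname is just an example of a dual-stacked service).

```python
import socket

def ipv6_addresses(host, port=443):
    """Return the IPv6 addresses the resolver offers for a host, if any."""
    try:
        infos = socket.getaddrinfo(host, port, family=socket.AF_INET6,
                                   type=socket.SOCK_STREAM)
    except socket.gaierror:
        return []  # no AAAA records, or no IPv6 support on this resolver
    return sorted({sockaddr[0] for *_rest, sockaddr in infos})

if __name__ == "__main__":
    # Example hostname; any dual-stacked service works the same way.
    for addr in ipv6_addresses("www.google.com"):
        print(addr)
```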

22 October 2018
5
IP-resources

Thinking of buying IP addresses? Reasons to do it

Any computer connected to the Internet must be assigned a globally unique numeric identifier called an IP address. This is what allows the various programs operating on the network to work together within a single information space. That is why buying IP addresses is critical for those who work in e-commerce or the hosting industry, or who run a lot of devices in their internal infrastructure. Many will say that you can simply rent this resource, but then you need to make sure you will receive a dedicated address of your own. Dynamic IP addresses are issued in such a way that the address is assigned to the user only for as long as the online connection to the server lasts; if the user disconnects and reconnects, the computer receives a new IP address. A permanent IP address, by contrast, does not change and is called a dedicated address. Because the Internet is growing very quickly and the address space is limited, IP addresses are used sparingly, and the usual scheme today is a dynamic (per-session) address. For many users, though, buying IP addresses is quite important: a dedicated IP address can be used to build a stable, secure channel for exchanging information via a VPN, firewall rules and so on. Many hosting providers, Internet providers and data centers offer an IP address purchase service. Keep in mind that the price of IP addresses is growing constantly, by 13-15% a year, as is the cost of RIR membership, so if your company will definitely need addresses, it is better to buy them sooner.

The advantages of buying IP addresses:
 - secure access to your resources;
 - the ability to run web, mail and other servers on office computers;
 - the ability to build distributed corporate networks with remote access over the Internet;
 - the ability to reach restricted-access information resources where the user is identified by IP address.

8 September 2018
18
IP-resources
Advantageous IPv4 rental in detail

Many companies are considering renting IPv4 addresses while they are still migrating their systems to IPv6, and this step can be the more acceptable option for several reasons. First, a typical IP address lease runs from $10 to $20 per address per year. Second, for companies that are actively moving to IPv6 within a short time frame, renting IPv4 addresses can simply be the easier and more cost-effective route: once the transfer of rights is complete, tenants return the addresses as soon as they are no longer needed. Even if the lease terms are fairly expensive, the total cost will still be lower than buying the addresses outright. Companies familiar with the procedure can ease the process of renting IPv4 addresses by engaging the stakeholders and assisting in negotiations. Renting out IPv4 addresses has also become the basis of a new type of business in the hosting industry. Hosting companies that used to include an IP address with a hosted website or server for free now charge customers for using it, typically about $1 per month. If a hosting company buys a block of IP addresses at $18-19 per address and then leases them to customers at $1 per month per address, the purchased addresses start paying for themselves only after roughly a year and a half. Multiply those figures by thousands of addresses rented out over several years, and the profitability of this business becomes obvious. For companies that need to rent IPv4 addresses, many factors have to be weighed, each with its own level of complexity.
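The payback arithmetic behind that example is just purchase price divided by monthly rent; a short sketch using the figures quoted above (purchase at $18-19 per address, rent at $1 per month):

```python
# Payback period for renting out purchased IPv4 addresses,
# using the per-address figures quoted in the article.
purchase_price_low, purchase_price_high = 18, 19   # USD per address
monthly_rent = 1                                   # USD per address per month

for price in (purchase_price_low, purchase_price_high):
    months_to_break_even = price / monthly_rent
    print(f"bought at ${price}: breaks even after {months_to_break_even:.0f} months")

# Everything earned after the break-even point is margin; with thousands of
# addresses leased over several years the totals add up quickly.
block = 1024                     # e.g. four /24 blocks
years = 3
margin = block * (monthly_rent * 12 * years - purchase_price_high)
print(f"{block} addresses over {years} years: ~${margin:,} gross margin")
```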

8 September 2018
7
IP-resources

Sale of IPv4: from agreement to receipt of payment

Having the right strategy is the key to a successful sale of IPv4. You have to control the sale from the very beginning of the process until you receive payment and the buyer gets all the rights. You know you hold a valuable and sought-after asset in your IPv4 addresses, but you are not sure how to monetize it or which risks to avoid along the way. With the advent of IPv6, your IPv4 addresses are an asset that will sooner or later become obsolete, so you should get the best price available under current market conditions. It is worth exploring every option for selling IPv4, whether inside your region or outside it. If you only want to reduce your number of IPs, you can sell part of your block while keeping the amount you need. To get acquainted with current prices and find out what your IPv4 addresses are worth today, you can analyze the listings on the Pangnote platform or on the numerous auction sites. For organizations considering a sale, we provide details of the ARIN, APNIC and RIPE platforms with hundreds of registered participants, as well as expertise and customer support within the registrar organizations, and we will work with you to oversee the entire IPv4 sale from start to finish. Take the following points into account:
 - the size of the transferred block cannot be smaller than a /24;
 - the receiving LIR must confirm that it will create sub-blocks of the ASSIGNED PA type;
 - under the RIPE NCC rules, the received block may not be re-transferred for 2 years (24 months);
 - the seller must have access to the account in the RIR database;
 - payment is handled through an independent guarantor: while the deal is in progress, your MNT object in your RIR's database is not replaced by the buyer's MNT, and once the transfer is complete the guarantor sends the full purchase amount to your account.
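The /24 minimum transfer size mentioned above corresponds to 256 addresses, and Python's standard ipaddress module makes it easy to check block sizes and how a larger block splits into /24s; the prefixes below are documentation ranges used purely as examples.

```python
import ipaddress

# The smallest transferable block is a /24, i.e. 256 addresses.
block = ipaddress.ip_network("192.0.2.0/24")     # documentation prefix, example only
print(block.num_addresses)                       # 256

# A seller holding a /22 who wants to keep part of it could split it into /24s
# and transfer only some of them.
larger = ipaddress.ip_network("198.51.100.0/22", strict=False)
subnets = list(larger.subnets(new_prefix=24))
print([str(n) for n in subnets])                 # four /24 blocks
```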

8 September 2018
7
IP-resources
How to buy an IPv6 address?

What for?

The most important feature of the new IPv6 protocol is its increased address capacity: 128-bit addresses instead of 32-bit ones. The goal of IPv6 adoption is simple: there will be more than enough addresses for everyone, and you will never have to think about running out. This matters because it can also simplify routing. Routers usually connect a relatively small number of networks; the simplest example is your home router, which connects your local network to the Internet. For each packet it receives, it must do one of three things: discard it, forward it to the internal network, or forward it to an external network. But let's be fair: the Internet works fine even now, despite the shortage of free IPv4 addresses, so the main reason to acquire this resource is to upgrade your own systems.

Certification of IPv4 and IPv6 address resources

You can ask for an IPv6 allocation if you are already a member of your RIR. Certification lets you uniquely tie a network prefix to an autonomous system number and thereby raise the standing of your prefixes in routers' global routing tables. These services are provided free of charge, which is one more reason to get an IPv6 address. But if you need an additional IP subnet, you will have to find another member willing to transfer it to you.

What is the procedure?

With LIR status you can request the allocation, on lease terms (for individuals or legal entities), of PA blocks of IPv6 addresses from another LIR's network. The process takes no more than a day, and you receive a /48 IPv6 block (2^80 addresses). If you need an even larger IPv6 network, from /32 up to /29 (2^99 addresses), with the ability to register smaller networks within it for your own users in any RIR, then our service for obtaining LIR status is what you need.
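The prefix sizes quoted above are easy to verify with the standard ipaddress module: a /48 leaves 128 - 48 = 80 host bits and a /29 leaves 99. The sketch below also shows how many standard /64 end-user subnets fit into a /48 (the prefix itself is a documentation example, not a real allocation).

```python
import ipaddress

# 2001:db8::/48 is a documentation prefix used purely as an example.
block = ipaddress.ip_network("2001:db8::/48")

print(block.num_addresses == 2 ** 80)   # True: 128 - 48 = 80 host bits
print(ipaddress.ip_network("2001:db8::/29", strict=False).num_addresses == 2 ** 99)  # True

# A /48 can be carved into 2**16 = 65,536 standard /64 subnets for end users.
subnet_count = 2 ** (64 - 48)
print(f"{subnet_count:,} /64 subnets per /48")

# The first couple of them, for illustration:
for subnet in list(block.subnets(new_prefix=64))[:2]:
    print(subnet)
```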

6 September 2018
18
