As hyper-convergence emerges in the domestic market and gradually becomes mainstream, a technological shift in IT infrastructure has quietly arrived. The traditional IT architecture, in use for more than two decades, now faces a serious challenge from this new generation of architecture. In recent years, facing a huge market opportunity, hyper-converged vendors have sprung up, and the digital transformation of enterprise users has benefited from the innovation and promotion of the hyper-converged architecture.
Is hyper-convergence a concept or a technology?
Hyper-convergence is an approach to building IT infrastructure. Its core idea is to use general-purpose hardware and software-defined services to deliver the various functions of the IT infrastructure, including computing, storage, disaster recovery, and operations and maintenance (O&M) management, with all of these services running on one unified platform.
The concept of hyper-convergence includes three elements:
Use of general-purpose hardware: specifically, x86 servers. An IT platform built on proprietary hardware is therefore not hyper-convergence; the storage controller in traditional centralized storage, for example, is proprietary hardware;
Software-defined: hyper-converged IT services are implemented by software running on x86 servers, such as distributed storage software providing the data storage service. By contrast, traditional IT services are mostly implemented in proprietary hardware, with the functional logic baked into firmware;
Unified platform: all IT services must belong to the same software stack on one platform, unlike earlier architectures where each service belonged to a different platform. This must be distinguished from the converged solutions on the market that simply deliver storage devices, servers, and network switches together in one cabinet, such as VCE Vblock; those are completely different from hyper-convergence.
It can be seen that hyper-convergence is a concept, not a technology, and grasping this is key to understanding it. The difference, or gap, between hyper-converged products and vendors lies not in the concept itself but in the technology and implementation behind it. A simple analogy: "car" is a concept; the dictionary explains it as "usually a four-wheeled automobile, used for street and road transport." The definition places no rules or restrictions on how the concept is realized, so a pure-electric Tesla, a Foton light truck, and a golf cart all meet it, yet their use cases and underlying implementations are worlds apart.
What challenges does the data center face?
IT infrastructure costs are hard to predict and manage: as a data center accumulates years of IT construction, investment in the traditional infrastructure can no longer keep up with demand;
Performance bottlenecks after device aggregation: as front-end services expand, access to centralized storage keeps increasing, and performance bottlenecks in back-end storage and the SAN transmission links gradually become apparent;
Difficult horizontal scaling and resource islands: to work around front-end and back-end performance bottlenecks through horizontal expansion, many devices and services end up in a chimney-like (siloed) architecture that is hard to maintain;
Heavy and complicated O&M management: traditional-architecture equipment is complex to configure, each environment requires dedicated personnel to maintain, and delivery cycles are long.
In response to these challenges, it is urgent to plan a hyper-converged architecture for the existing data center.
Data Center Development
Virtualization on traditional architectures has been deployed in most customer environments, but it still fails to meet the requirements of low cost, high performance, flexible expansion, and easy management.
What are the advantages of a hyper-converged architecture?
The hyper-converged architecture integrates virtualized computing and storage on the same system platform. More and more enterprises are accepting hyper-convergence and beginning the transition to it, but how does a hyper-converged architecture differ from a traditional one, and what are its advantages?
Traditional equipment is harder to implement and manage than hyper-converged equipment, and it later requires maintenance personnel dedicated to SAN storage to keep the system running.
Scalability and performance
Traditional architecture: when administrators need more storage, they buy storage arrays; when they need more computing power, they buy servers. The added equipment can only satisfy a certain amount of data growth and does nothing to improve system performance; once that capacity is exceeded, performance drops noticeably. Installing and deploying new equipment involves heavy, tedious work, and to reduce the risk to data the whole system generally has to be shut down during the operation, which affects business continuity and work efficiency. As more and more servers access centralized storage, the performance bottleneck becomes increasingly apparent.
Hyper-convergence can seamlessly add nodes without downtime, improving system performance and storage capacity linearly. The dynamic clustering approach lets computing and storage grow one node at a time, so there is no need to over-provision infrastructure and no uncertainty in expanding the system, and there is no single performance choke point.
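The "add one node, move only that node's share of the data" behavior described above is typically achieved with a distribution scheme such as consistent hashing. A minimal Python sketch (node names, virtual-node count, and block count are illustrative, not from any specific product):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map data blocks to nodes; adding a node moves only ~1/N of the blocks."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes          # virtual nodes smooth the distribution
        self.ring = []                # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def locate(self, block_id):
        h = self._hash(block_id)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node1", "node2", "node3"])
before = {b: ring.locate(b) for b in (f"block-{i}" for i in range(1000))}
ring.add_node("node4")                # scale out by one node, no downtime
after = {b: ring.locate(b) for b in before}
moved = sum(before[b] != after[b] for b in before)
print(f"blocks moved: {moved} / 1000")   # roughly a quarter of the blocks
```

Because only the blocks claimed by the new node migrate, expansion does not require rewriting all existing data, which is why capacity and performance can grow one node at a time.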
Traditional architectures are single-point architectures that rely on hardware reliability; when a failure occurs, manual intervention is required to recover. The hyper-converged architecture uses distributed data placement and has automatic recovery capabilities guaranteed by software.
Number of controllers
Traditional storage comes standard with two controllers. When one controller fails, the pressure on the remaining controller rises sharply and the risk grows accordingly. To avoid this, two storage arrays are usually deployed for redundancy, which inevitably increases hardware cost and deployment complexity. Hyper-convergence can provide a higher level of hardware fault tolerance, keeping the application system running with stable performance when a controller fails; even when multiple controllers fail at the same time, it can keep business systems running uninterrupted.
Data storage medium
Traditional storage does not include SSDs as standard, and adding them is very expensive; making full use of SSD performance also places certain demands on the skills of system maintenance personnel. A hyper-converged appliance ships with SSDs and built-in automatic data tiering, which maximizes SSD performance without manual intervention.
Hyper-converged equipment occupies less cabinet space than traditional-architecture equipment, improving the utilization of the user's data center (IDC) resources.
The energy consumption of IT equipment is a major determinant of later operation and maintenance costs.
The traditional SAN architecture greatly limits the later expansion of virtualization systems, whereas the hyper-converged architecture is easy to scale out.
Data tiering in traditional storage is a paid feature that must be configured and tuned manually, making accurate configuration difficult for maintenance staff. The hyper-converged distributed file system tiers hot and cold data fully automatically and intelligently, monitoring data heat in real time and migrating data promptly to ensure the best read/write performance and user experience.
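The automatic tiering described above comes down to tracking access frequency ("heat") per block and keeping the hottest blocks on the SSD tier. A simplified Python sketch (the capacity and the access pattern are illustrative assumptions, not any vendor's algorithm):

```python
from collections import Counter

class TieringEngine:
    """Keep the most frequently accessed blocks on the SSD tier."""

    def __init__(self, ssd_capacity):
        self.ssd_capacity = ssd_capacity  # how many blocks fit on SSD
        self.heat = Counter()             # access count per block ("heat")
        self.ssd = set()                  # blocks currently on the SSD tier

    def record_access(self, block_id):
        self.heat[block_id] += 1

    def rebalance(self):
        """Migrate blocks so the SSD holds the hottest ones."""
        hottest = {b for b, _ in self.heat.most_common(self.ssd_capacity)}
        promote = hottest - self.ssd      # cold -> hot: move HDD to SSD
        demote = self.ssd - hottest       # hot -> cold: move SSD to HDD
        self.ssd = hottest
        return promote, demote

engine = TieringEngine(ssd_capacity=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    engine.record_access(block)
promoted, demoted = engine.rebalance()
print(sorted(engine.ssd))   # ['a', 'b'] -- the two hottest blocks
```

A real system would run rebalancing continuously in the background and decay old heat counts so that recently hot data wins over historically hot data.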
The hyper-converged deduplication function ensures that more I/O operations complete in memory and on SSD, increases the effective storage space, and thereby further improves system performance, reduces wasted space, and extends the storage expansion and refresh cycle.
With traditional RAID 5 storage, the loss of two hard disks causes user desktops to crash or data to be lost, leaving users unable to work. Only after the faulty hardware is repaired can maintenance personnel manually restore the fault-tolerant state and recover the desktops lost to the failure, and even then application data integrity cannot be guaranteed. Hyper-converged user data is kept in two copies on a distributed file system with multi-node parallelism: even if an entire node fails, the distributed file system needs only 15-20 minutes to rebuild the data automatically and in parallel on the other nodes and return to a fault-tolerant state (at which point another node can fail).
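The two-copy scheme and parallel rebuild can be sketched as follows: each block has replicas on two distinct nodes, and when a node fails, its blocks are re-replicated onto the surviving nodes until every block has two copies again. A simplified Python sketch (node names and the random placement policy are illustrative assumptions):

```python
import random

class ReplicatedCluster:
    """Two replicas per block on distinct nodes; rebuild after a node failure."""

    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.placement = {}           # block_id -> set of nodes holding a copy

    def write(self, block_id):
        # Place two copies on two different nodes.
        self.placement[block_id] = set(random.sample(sorted(self.nodes), 2))

    def fail_node(self, node):
        """Lose a node, then restore two copies for every affected block."""
        self.nodes.discard(node)
        for block, holders in self.placement.items():
            holders.discard(node)
            if len(holders) < 2:      # degraded block: re-replicate it
                candidates = self.nodes - holders
                holders.add(random.choice(sorted(candidates)))

cluster = ReplicatedCluster(["n1", "n2", "n3", "n4"])
for i in range(100):
    cluster.write(f"block-{i}")
cluster.fail_node("n2")               # an entire node fails
assert all(len(h) == 2 for h in cluster.placement.values())
print("fault tolerance restored; another node may now fail")
```

The rebuild is fast in practice because every surviving node re-replicates its own share of the degraded blocks in parallel, rather than funneling all reconstruction through a single spare disk as RAID does.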
Traditional architecture: third-party tools must be used to achieve disaster recovery, with complex management and high investment. Hyper-converged appliances come with disaster recovery tools built in, so disaster recovery requires no extra payment and management is simple, with capabilities ranging from data-level to application-level disaster recovery.