1 Introduction
Cloud computing is not a brand-new idea. In 1961, Turing Award winner John McCarthy proposed that computing power would one day be provided to users as a utility, like water and electricity. In 2006, Google CEO Eric Schmidt used the term "cloud computing" at a search engine conference, and in the same year Amazon launched its cloud computing services, becoming one of the few cloud providers to offer a 99.95% uptime guarantee. In 2007, promotion by IBM, Google, and other companies brought the concept of cloud computing to the attention of the global public and media.
There is as yet no unified definition of cloud computing. According to Wikipedia, cloud computing is an Internet-based style of computing that provides on-demand computation to individual and business users through heterogeneous, autonomous services on the Internet. Researchers at UC Berkeley define cloud computing as the applications delivered as services over the Internet, together with the data-center hardware and software that provide those services.
The author believes that the concept of cloud computing should not be limited to computation; it is more apt to speak simply of "the cloud". The essence of the cloud is service on demand, delivered to users over the network: combinations of services built on computing power, storage capacity, and network capacity. Precisely because the services are on demand, the corresponding computing, storage, and network resources must be pooled, managed on demand, and provisioned elastically.
In terms of service provision, the cloud offers three kinds of service. IaaS provides virtualized resources directly to customers on demand. PaaS builds an application platform supporting multiple services on top of a virtualized cloud computing platform, and exposes its interfaces, development environment, and runtime environment to the outside. SaaS provides on-demand, rapidly deployed rental of application software on a virtualized cloud computing platform. IaaS and PaaS currently receive the most study in the industry, and this article elaborates on the key technologies, implementation solutions, and cloud standardization of IaaS (the basic cloud) and PaaS (the business cloud).
2 Key technologies
2.1 Virtualization
Virtualization is the most important technical foundation of cloud computing. Virtualization technology provides a logical abstraction and unified representation of physical resources; through it, resource utilization can be improved and resources can be deployed quickly and flexibly as user business needs change.
2.1.1 Virtualization platform architecture
In a cloud computing environment, virtualization is achieved by running multiple virtual machines simultaneously on one physical host. The virtual machines run on a virtualization platform, which monitors their operating systems and mediates their sharing of physical resources.
In general, the virtualization platform has a three-layer structure: the lowest layer is the virtualization layer, which provides basic virtualization capabilities; the middle layer is the control and execution layer, which carries out the individual control functions; and the top layer is the management layer, which applies policy to the execution layer and provides unified management of the platform. As shown in Figure 1, the virtualization platform should include the virtual machine monitor (hypervisor), virtual resource management, virtual machine migration, fault recovery, and policy management (for example, automatic virtual machine deployment and resource allocation). The specific functions of each part are as follows:
Figure 1 Virtualization platform functional structure
(1) Virtual machine management: mainly provides VM creation, start, stop, migration, recovery, and deletion; virtual machine image management; and automatic configuration and rapid deployment of the VM operating environment. Based on each host node's and VM's CPU, memory, I/O, and network usage, VM management can automatically migrate VMs between host nodes so that VM performance remains guaranteed. It also provides host-node failure protection: when a host node fails, this functional entity automatically transfers the services running on it to other nodes.
(2) Highly available cluster: ensures host-node failure protection. When a host node fails, the cluster automatically moves its services to other nodes in the cluster. The cluster can also provide load balancing and storage clustering.
(3) Dynamic resource allocation: creation, configuration, modification, and deletion of virtual storage and networks. When a VM runs short of memory, disk, or network resources, it can temporarily borrow unused resources of the same kind from other VMs on the same node.
(4) Dynamic load balancing: balances energy consumption against workload. According to policy, host nodes can be powered on or off and the affected VMs migrated.
(5) Management tools: a set of tools that the virtualization platform needs to support, such as P2V (Physical to Virtual), V2P (Virtual to Physical), VA (Virtual Application), and JeOS (Just Enough Operating System).
(6) Host security: ensures the security of the VM operating environment through a set of software such as anti-virus and IDS tools.
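The load-triggered migration described in (1) and (4) above, moving VMs off overloaded hosts based on resource usage, can be illustrated with a small Python sketch. All class names and thresholds here are illustrative assumptions, not part of any real hypervisor API:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_capacity: float                       # total CPU units
    vms: dict = field(default_factory=dict)   # vm name -> CPU demand

    @property
    def load(self) -> float:
        return sum(self.vms.values()) / self.cpu_capacity

def rebalance(hosts, high=0.8):
    """Migrate the smallest VM off any host whose load exceeds `high`
    to the least-loaded host that can absorb it without itself
    crossing `high`. Returns the list of migrations performed."""
    migrations = []
    for src in hosts:
        while src.load > high and src.vms:
            vm, demand = min(src.vms.items(), key=lambda kv: kv[1])
            candidates = [
                h for h in hosts
                if h is not src
                and (sum(h.vms.values()) + demand) / h.cpu_capacity <= high
            ]
            if not candidates:
                break  # nowhere to place the VM; give up on this host
            dst = min(candidates, key=lambda h: h.load)
            del src.vms[vm]
            dst.vms[vm] = demand
            migrations.append((vm, src.name, dst.name))
    return migrations
```

A real platform would weigh memory, I/O, and network alongside CPU, and account for migration cost, but the threshold-and-migrate structure is the same.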
2.1.2 Virtualization platform deployment
As shown in Figure 2, in actual deployment the cluster, dynamic resource allocation, and host security functions are closely tied to the hypervisor and can be deployed as independent software on the host nodes, while the remaining functions can be integrated into a VM manager. With this allocation, the virtualization platform divides into two software packages: a hypervisor-plus-host-OS package residing on each host node, and a VM manager package. The interface between the two reduces to configuration, simple control, viewing, and monitoring.
Figure 2 Virtualized platform system architecture diagram
The hosts running VMs use the cluster function to form a highly available cluster system. When one node fails, its services are automatically migrated to other nodes and storage and network resources are redistributed, without intervention from the VM manager, so that service is uninterrupted. The VM manager can be deployed on an independent server and is responsible for alarms, running-status monitoring, and load adjustment of the virtualization platform. The physical deployment of the virtualization platform is shown in Figure 3.
Figure 3 Virtualization platform deployment diagram
2.1.3 Advantages of virtualization platform
(1) Platform virtualization makes optimal use of resources. Virtualization turns one physical server, or one set of hardware resources, into multiple virtual machines, letting different application services run on different VMs without reducing system robustness, security, or scalability. At the same time, it raises hardware utilization and reduces applications' dependence on the hardware platform, so enterprises can cut capital and operating costs and improve IT service delivery without being constrained to a limited range of operating systems, applications, and hardware options.
(2) Because virtual machines are decoupled from the underlying hardware, resources can be allocated on demand and load balanced dynamically. When the platform detects that a compute node's load is too high, it can migrate VMs to lightly loaded nodes, or reallocate computing resources within a node, without interrupting service. Virtual machines executing urgent computing tasks receive more computing resources, guaranteeing the responsiveness of critical tasks.
(3) Platform virtualization gives the system a self-healing capability and improves reliability. When server hardware fails, the affected virtual machines can be restarted automatically, eliminating the difficulty of reinstalling operating systems and applications on different hardware. Any physical server can serve as the recovery target of a virtual server, reducing hardware and maintenance costs.
(4) Improved energy saving and emission reduction. Working with server power-management hardware, the platform achieves intelligent power management: it optimizes where virtual machine resources actually run so as to minimize power consumption, saving operators substantial electricity, cutting power-supply costs, and reducing emissions.
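The energy-saving idea in (4), consolidating VMs onto fewer physical machines so the rest can be powered down, is essentially a bin-packing problem. A minimal first-fit-decreasing sketch, with assumed names and a uniform host capacity:

```python
def consolidate(vm_demands, host_capacity):
    """Pack VMs (sorted by decreasing demand) onto hosts first-fit.
    Returns a list of hosts, each a list of (vm, demand) pairs; any
    physical machine not in the result could be powered down."""
    hosts = []  # each entry: [remaining_capacity, [(vm, demand), ...]]
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for h in hosts:
            if h[0] >= demand:       # fits on an already-open host
                h[0] -= demand
                h[1].append((vm, demand))
                break
        else:                        # no open host fits: power on another
            hosts.append([host_capacity - demand, [(vm, demand)]])
    return [h[1] for h in hosts]
```

Production placement would also respect memory, affinity, and headroom for load spikes; this only shows why consolidation frees whole machines for power management.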
2.2 Distributed File System
A distributed file system here means a cloud storage file system, developed from a conventional file system, that can be used across large-scale clusters. Its main features include:
(1) High reliability: The cloud storage system supports the function of saving multiple copies between nodes to provide data reliability.
(2) High access performance: According to the importance of data and access frequency, the data is stored in multiple copies, and hot data is read and written in parallel to improve access performance.
(3) Online migration and replication: The storage node supports online migration and replication, and capacity expansion does not affect upper-layer applications.
(4) Automatic load balancing: data can be moved from existing nodes to newly added nodes according to the current system load. Its sharded storage uses the block as the smallest storage unit, and all storage nodes compute in parallel during storage and query.
(5) Separation of metadata and data: the distributed file storage system is designed with metadata kept separate from data.
Figure 4 shows the system architecture of the distributed file system, where:
Figure 4 Distributed file system architecture
● FAC: responsible for providing users with the file access interface; installed on the server side. It provides the API through which applications perform file operations, and interacts with the FLR and FAS to complete data access.
● FAS: responsible for file scheduling and access; performs read and write operations on the disk array, provides data read/write functions to the FAC, and completes the distributed storage of data.
● FLR: responsible for metadata management; saves the namespace of files and blocks, the mapping of files to blocks, and the location of each block replica.
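The metadata/data separation described above can be sketched in a few lines: the FLR holds only the namespace and block locations, FAS nodes hold block contents, and the FAC read path first consults the FLR. This is a toy illustration under assumed interfaces, not ZTE's implementation:

```python
class FLR:
    """Metadata server: file namespace and block locations only."""
    def __init__(self):
        self.files = {}      # path -> [block_id, ...]
        self.locations = {}  # block_id -> [FAS node, ...] (replicas)

class FAS:
    """Data server: stores block contents."""
    def __init__(self):
        self.blocks = {}     # block_id -> bytes

def write_file(flr, fas_nodes, path, data, block_size=4, replicas=2):
    """Split `data` into fixed-size blocks and store each block on
    `replicas` FAS nodes, recording the mapping in the FLR."""
    block_ids = []
    for i in range(0, len(data), block_size):
        bid = f"{path}#{i // block_size}"
        chunk = data[i:i + block_size]
        targets = fas_nodes[:replicas]  # real systems place by load/rack
        for node in targets:
            node.blocks[bid] = chunk
        flr.locations[bid] = targets
        block_ids.append(bid)
    flr.files[path] = block_ids

def read_file(flr, path):
    """FAC read path: ask the FLR for the block list, then fetch each
    block from one of its replicas."""
    return b"".join(flr.locations[bid][0].blocks[bid]
                    for bid in flr.files[path])
```

Because the FLR never touches block contents, metadata traffic stays small while block reads and writes spread across the FAS nodes in parallel.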
2.3 Distributed Database
The distributed database achieves dynamic load balancing and automatic takeover of failed nodes; it offers high reliability, high performance, high availability, and high scalability, with a clear performance advantage for petabyte-scale structured data services. Figure 5 shows the system architecture of the distributed database, where:
Figure 5 Distributed database architecture
(1) PEC (Parsing & Executing Controller): the SQL service access point. It receives customers' SQL requests and directs them to the specific TabletServer for data access.
(2) Master: Mainly responsible for the metadata management and scheduling of the database.
(3) TabletServer: responsible for saving the sub-tables of the database.
(4) Space: the lock service of the distributed file system, mainly guaranteeing exclusive access to data under shared access.
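The routing role of the PEC and Master can be sketched as a range lookup: the Master keeps sorted split keys describing which TabletServer owns which key range, and the PEC locates the right server before reading. Names and structure here are assumptions for illustration, not ZTE's API:

```python
import bisect

class Master:
    """Holds tablet metadata: sorted split keys and one server per range."""
    def __init__(self, split_keys, servers):
        # split_keys: sorted range boundaries; servers: one per range,
        # so there is exactly one more server than boundary
        assert len(split_keys) + 1 == len(servers)
        self.split_keys = split_keys
        self.servers = servers

    def locate(self, row_key):
        """Binary-search the boundaries to find the owning server."""
        return self.servers[bisect.bisect_right(self.split_keys, row_key)]

class TabletServer:
    def __init__(self, name):
        self.name = name
        self.rows = {}  # row_key -> value (a sub-table of the database)

def pec_get(master, row_key):
    """PEC read path: locate the tablet, then read the row from it."""
    return master.locate(row_key).rows.get(row_key)
```

In a real system the PEC would cache tablet locations and the Master would also handle splits and reassignment when a TabletServer fails, but the locate-then-access pattern is the core of the architecture.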
2.4 ZTE CoCloud value-added service cloud (see Figure 6)
Figure 6 ZTE CoCloud value-added service cloud
Based on the above key technologies, ZTE's CoCloud value-added service cloud stack provides IaaS, PaaS, and SaaS service models, including the unified open environment UOE/Mashup PaaS cloud platform, cloud storage, multi-service cloud scheduling, ECP, NGCC, VPBX, the application factory, and other cloud service solutions.
2.4.1 ZTE CoCloud Value-added Service Cloud Advantage
(1) Advanced adaptive automatic software deployment: monitors the resource situation of each business node in real time and adds or removes business nodes according to pre-defined policies; it can automatically install, start, and shut down business software, so that software deployment adapts to the resource situation of the business nodes.
(2) Flexible and efficient business expansion: after simple configuration on the existing cloud computing platform, resources can be allocated to a new business and the new business rolled out quickly.
(3) Unified operation and maintenance management: once multiple services run on the same cloud infrastructure, the current operation and maintenance approach must change; construction and O&M are no longer chimney-like, and a unified cloud management platform lets maintenance personnel manage business and equipment with separated privileges and domains.
(4) Automatic online upgrade: a software version center upgrades service versions and virtualization software versions manually or automatically.
(5) Green energy saving: at the business level, adaptive automatic deployment keeps only the necessary business nodes when the business is idle, reducing virtual machine usage and resource consumption; at the virtualization level, a combination of dynamic resource balancing and green energy-saving policies places virtual machines on appropriate physical machines, retains the necessary standby physical machines, and puts the rest into power-saving management, cutting operators' unnecessary energy consumption during idle periods and moving toward a green data center.
(6) Multi-level disaster recovery: when hardware fails, the virtual machine management center automatically migrates its virtual machines to other physical machines, providing disaster recovery at the virtualization layer; when a virtual machine fails and cannot be recovered, the application software deployment subsystem releases the failed virtual machine's resources and requests a new virtual machine on which to run the failed business node.
(7) Smooth expansion: a layered structure with loose coupling between the business level and the virtual machine level. When physical resources run short, physical machines are simply added to the resource pool and virtualization software installed; when business capacity runs short, the adaptive automatic deployment mechanism at the business level automatically adds nodes to expand capacity.
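The adaptive deployment policy in (1), growing or shrinking the set of business nodes against pre-set load thresholds, might look like the following sketch; the thresholds and step size are illustrative assumptions:

```python
def scale_decision(node_loads, scale_out=0.75, scale_in=0.25,
                   min_nodes=1, step=1):
    """Given current per-node load fractions, return a positive number
    of business nodes to add, a negative number to remove, or 0 to
    hold steady. A floor of `min_nodes` is always kept running."""
    if not node_loads:
        return step  # nothing running: start the minimum
    avg = sum(node_loads) / len(node_loads)
    if avg > scale_out:
        return step                       # overloaded: add a node
    if avg < scale_in and len(node_loads) - step >= min_nodes:
        return -step                      # idle: release a node
    return 0
```

The gap between the scale-out and scale-in thresholds acts as hysteresis, preventing the platform from oscillating between adding and removing nodes on small load fluctuations.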
2.4.2 ZTE CoCloud Value-added Service Cloud Optimization Example
Taking the multi-service cloud scheduling of WAP gateways and MMS as an example, assume the existing network's WAP gateway capacity is 30,000 TPS and its MMS capacity is 3,000. Figure 7 compares independent construction with the multi-service cloud computing solution.
Figure 7 Comparison of independent construction and multi-service cloud solutions
3 Cloud computing standardization
3.1 General introduction
At present, about 40 standards organizations at home and abroad are conducting cloud computing research or work related to cloud computing technology, and as many as 150 industry bodies are engaged in cloud computing industry activities.
Figure 8 gives a panoramic view of the current major categories and standards organizations, which include OCCI, OASIS, DMTF, CSA, OMG, SNIA, OGF, and others. As the figure shows, today's cloud computing standards organizations mainly define dedicated API interfaces. Future cloud computing scenarios will support public, private, and hybrid clouds; although proprietary interfaces will still be used, the relevant interfaces and some internal implementation technologies will follow standardized specifications.
Figure 8 Cloud computing standardization view
With the rise of cloud computing, telecom operators, equipment vendors, and government agencies at home and abroad are actively investing in cloud computing research, hoping to use the technology to optimize and consolidate network structure, reduce CAPEX and OPEX, find new profit opportunities and growth points, and support the transformation to an information-service society. At present there is no standard, unified technical architecture for cloud computing in telecommunications networks. If different manufacturers offer divergent cloud computing solutions, problems with interconnection, interworking, interoperability, and hardware reuse between their equipment will obstruct the development of cloud computing. How to build a universal cloud computing platform in the telecommunications network, one that supports traditional telecom capabilities and interconnection, will therefore be the focus of the telecom cloud computing network architecture.
3.2 IETF Clouds Bar BoF
IETF established the Clouds bar BoF at its 77th meeting. Its main focus is the protocol requirements of cloud-based applications and services, covering protocols between different cloud computing systems, between different layers within one cloud computing architecture, and between different functional entities within or on the same layer.
The Clouds bar BoF has so far met twice. The first meeting was held on March 25, 2010 in Anaheim, CA, USA; it performed a gap analysis, surveyed cloud-based systems and services, and coordinated with IETF's DECADE, IRTF/VNRG, NFSv4, and other groups to avoid overlapping work.
The second meeting of the Clouds bar BoF (at IETF 78) was held on July 23, 2010 in Maastricht, the Netherlands. There, Clouds received the support of Google, Cisco, Verizon, ALU, Alertlogic, and other companies, which presented their respective positions in the field of cloud computing. The third meeting (at IETF 79) will be held in Beijing in November 2010.
3.3 ITU-T FG Cloud
At the ITU-T meeting in February 2010, the Telecommunication Standardization Advisory Group (TSAG) discussed and approved the establishment of the ITU-T Cloud Computing Focus Group (FG Cloud). Within the scope of ITU-T, the focus group considers how telecommunications networks support cloud computing services and applications in terms of transmission, security, and services. Its objectives are to:
(1) Identify the potential impacts and priorities of telecom support for cloud computing and of related standards development.
(2) Within the scope of ITU-T, investigate the research needs of future fixed and mobile network projects.
(3) Analyze which components will benefit from interoperability and standardization.
(4) Categorize the standard organizations that study cloud computing in the telecommunications field.
(5) Perform hierarchical analysis on the characteristics and functions of cloud computing to estimate the standardized timetable for cloud computing in the telecommunications field.
The Cloud Computing Focus Group can currently be divided into the following two working groups:
● WG1, Cloud benefits and requirements: mainly studies the definition of the cloud, its ecosystem and terminology; use cases, requirements, and architecture; cloud security; cloud infrastructure and networks; cloud business and resource management, platforms and middleware; and the benefits of cloud computing and the primary needs of ICT.
● WG2, Gap analysis and ITU-T cloud computing standards roadmap: mainly studies the activities of cloud computing standards organizations, gap analysis, and the action plan for ITU-T cloud computing standards research.
FG Cloud held its first meeting in Geneva on June 14-16, 2010, mainly discussing the progress of the various standards organizations' cloud computing research; study groups within ITU-T have also launched cloud-related research projects. First drafts of "Introduction to the Cloud Ecosystem: Definitions, Terminology, and Use Cases" and "Requirements for the Cloud Reference Architecture" have been produced. Figure 9 shows the architecture of the cloud ecosystem, which includes roles such as cloud user, cloud service provider, and cloud service developer.
Figure 9 Cloud ecosystem architecture
ITU is one of the world's three major international standards organizations; the importance it attaches to cloud computing through the ITU-T Cloud Computing Focus Group will profoundly affect the standardization of cloud computing in the telecommunications field.
This article comes from the Electronic Enthusiast Network; reprinting is welcome.