Creating virtual servers and installing operating systems
Virtual machine templates allow you to create a fully configured virtual server with an installed operating system in a few mouse clicks from the cloud provider’s control panel.
Migration of physical servers to a virtual environment
Usually, by the time a decision is made to deploy a virtual infrastructure, an organization already has a physical infrastructure, including physical servers and telecommunications equipment.
Physical servers can be turned into virtual ones using various specialized tools:
- the dd command in Linux
- the Disk2vhd utility in Windows
- VMware Converter Standalone
- specialized commercial products
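The simplest of these tools is dd: it copies a disk block for block into an image file that can then be attached to a hypervisor. A minimal sketch follows; the paths are placeholders, and in a real migration the input would be a block device such as /dev/sda, read from a live environment so the filesystem is not in use:

```shell
# Create a small fake "disk" so the sketch is self-contained;
# in practice the input would be a block device like /dev/sda.
src=/tmp/demo-disk.img
dst=/tmp/server-image.raw
truncate -s 8M "$src"

# Block-for-block copy; conv=sync,noerror keeps going past read errors
# (padding short blocks), which matters on aging physical disks.
dd if="$src" of="$dst" bs=1M conv=sync,noerror status=none

# Verify the copy is identical to the source.
cmp -s "$src" "$dst" && echo "images match"
```

The resulting raw image can then be converted to the target hypervisor's format (for example, with a tool such as qemu-img) before being attached to a virtual machine.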
There are systems that should not be migrated to the cloud:
- Programs that require physical hardware to run.
- Hardware such as electronic security keys (dongles) used for copy protection cannot be virtualized. Sometimes this problem can be solved by switching to a software-based license key.
- Systems that need high performance
- Applications that heavily load the RAM, disks, processor, or video card are not suitable for migration.
- Software with licensing restrictions on virtualization
- Some license agreements prohibit the use of products in a virtual environment.
- Critical applications that have not been tested
- Don’t migrate a system or application that is vital to your organization to a virtual platform without first testing it.
- IT systems requiring enhanced security
- Virtualization of any system containing sensitive data can pose an IT security risk.
Installing and updating applications
When using a rented virtual infrastructure (IaaS), you do not have to keep track of updates for the provider’s server hardware and network equipment. But the virtual machines themselves require regular updates to both operating systems and applications.
A common problem in large infrastructures is image sprawl: templates multiply until there is a huge number of virtual machine images with the same operating system but different applications or roles. Managing this zoo of images and keeping it up to date is not easy, but products such as System Center Virtual Machine Manager reduce the number of images by installing applications, roles, and features onto virtual servers from application packages.
Installing SSL certificates
If your organization develops or uses web applications located in a virtual infrastructure, then access to these applications must be protected using encryption.
You can, of course, use self-signed certificates or free Let’s Encrypt certificates, but they have significant drawbacks:
- Short validity period
- Free Let’s Encrypt certificates are valid for 90 days. For automatic reissue, you need to set up a scheduled job that runs a renewal script, or allow the CA client to make configuration changes to your servers.
- Not all domains can be secured with Let’s Encrypt certificates
- These certificates cannot be issued for some names, for example internal host names that are not reachable from the public Internet.
- Lack of technical support
- The Let’s Encrypt website offers documentation only in English, and many issuance and usage questions are answered only on community forums.
- No extended validation
- When a certificate is issued, only control of the domain is verified; the identity of the organization behind it is not checked. Thus, anyone on the Internet can obtain a certificate for a domain they control, whatever organization they claim to represent.
- Browser support issues
- When using Let’s Encrypt certificates, some browsers report that the certificate is not trusted. This mostly happens on outdated browsers and operating systems.
- Self-signed certificates trigger a “The security certificate is not trusted” warning
- This warning will scare away potential customers. Such certificates are suitable only for communication between users who know about the self-signed SSL certificate and have explicitly trusted it in their browsers.
- Self-signed certificates do not really validate a company
- They only provide secure data transfer.
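To see why self-signed certificates trigger browser warnings, consider how one is made with openssl (the common name and file paths below are arbitrary placeholders): the certificate’s issuer and subject are the same entity, so no trusted third party vouches for it.

```shell
# Generate a private key and a self-signed certificate in one step.
# CN and file names are placeholders for this sketch.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/selfsigned.key \
  -out /tmp/selfsigned.crt \
  -days 90 \
  -subj "/CN=internal.example.test"

# Issuer and subject are identical: the certificate vouches for itself,
# which is exactly what browsers warn about.
openssl x509 -in /tmp/selfsigned.crt -noout -subject -issuer
```

Such a certificate still encrypts traffic, but any user must manually mark it as trusted, which is only practical for internal services.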
Trusted certificates are the only reliable solution for securing web applications. Cloud service providers may offer to issue trusted SSL certificates from the control panel.
A virtual infrastructure increases the degree of IT integration while reducing the amount of physical hardware. But the number of network applications and services remains the same or even grows. Therefore, it is necessary to protect the virtual infrastructure in a comprehensive manner, combining network and host-based protection tools. You can use the following protection mechanisms:
- means of network authentication and authorization of users;
- firewalling both inside the virtualization server between guest machines and along the perimeter of the virtual infrastructure;
- systems for registration, collection and correlation analysis of security events;
- means of access control and delegation of authority to virtual machines;
- systems for monitoring the integrity of configurations of distributed components of a virtual infrastructure;
- means of anti-virus protection;
- tools for managing access to elements of the virtual infrastructure.
Keep in mind that a security policy tied to physical attributes such as a physical server, an IP address, or a MAC address does not work in the cloud, because the cloud service provider can change them. The security policy should instead be bound to logical attributes: user identity, groups or roles, and workload sensitivity.
The access control model should be built around the user. Network Access Control (NAC) defines who the users are and what they have access to. Access control must take into account that users can connect from anywhere on the Internet, at any time, from any device, and may request access from several devices simultaneously.
Virtualization technologies give rise to a number of new information security risks. A typical example of an attack on cloud environments is an “antivirus storm”, when antivirus scans start simultaneously on many virtual machines on the same host. Such a storm can disrupt the operation of the virtual machines. The high concentration of virtual machines and applications on one host also creates more opportunities for attackers. At the same time, a single unprotected or infected virtual machine can become a source of threat to the entire virtual infrastructure.
Traditional protection tools, such as agent-based antiviruses, are not always applicable in virtualization conditions. It is worth highlighting three main approaches to anti-virus protection of a virtual infrastructure.
- In the first approach, the virtual environment provides a special programming interface for inspecting virtual machines through the hypervisor, and the antivirus uses it, offloading all protection to a dedicated security virtual machine. This makes it possible to do without antivirus agents on the virtual machines themselves.
- The second approach is to work according to the old scheme, using antivirus agents that need to be updated and configured. But at the same time, antivirus vendors are trying to provide new opportunities for optimizing the execution of agents in a virtual environment.
- The third approach does not abandon agents completely: they are made as lightweight and cheap to run as possible, while most of the analytics is implemented on a dedicated virtual machine. This approach is the most versatile, but in some cases it may be inferior to the first two.
Because a virtual machine runs on virtual hardware that does not need to be monitored the way physical hardware does, you can remove all hardware-monitoring agents when migrating a physical server to a virtual platform. In addition, virtual machines boot faster than physical ones, so a monitoring system with too long a data collection interval may miss a virtual machine reboot.
Traditional performance data collection systems often do not work correctly on virtual machines, so monitoring requires the use of specialized tools that can be either built into the virtualization platform or designed specifically for monitoring virtual infrastructure.
The following criteria are used to evaluate the performance of virtual machines:
For Windows virtual machines:
- available memory size, megabytes;
- average time per write operation, seconds (logical disk);
- average time per write operation, seconds (disk);
- average time per read operation, seconds (logical disk);
- average time per transfer operation, seconds (logical disk);
- average time per read operation, seconds (disk);
- average time per transfer operation, seconds (disk);
- current disk queue length (logical disk);
- current disk queue length (disk);
- disk idle time percentage;
- file system error or corruption;
- low free space on the logical disk (in percent);
- low free space on the logical disk (in megabytes);
- percentage of logical disk idle time;
- pages of memory per second;
- percentage of read bandwidth used;
- percentage of total bandwidth used;
- percentage of write bandwidth used;
- percentage of allocated memory used;
- health of the DHCP Client service;
- health of the DNS Client service;
- health of the RPC service;
- health of the Server service;
- total percentage of CPU usage;
- health of the Windows Event Log service;
- health of the Windows Firewall service;
- health of the Windows Remote Management service.
For Linux virtual machines:
- disk access time, seconds;
- disk read time, seconds;
- disk write time, seconds;
- disk health;
- free space on the logical disk;
- free space on the logical disk (in percent);
- free inodes on the logical disk (in percent);
- performance of the network adapter;
- total processor time in percent;
- the amount of memory available to the operating system, megabytes.
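For Linux guests without a dedicated monitoring agent, several of the counters above can be read directly from standard interfaces. A rough sketch, assuming a Linux guest with /proc and GNU coreutils (a real monitoring system would sample these on an interval):

```shell
# Available memory in megabytes (values in /proc/meminfo are in kilobytes).
awk '/MemAvailable/ {printf "available memory: %d MB\n", $2 / 1024}' /proc/meminfo

# Free space on the root logical disk, in percent (df reports Use%).
df -P / | awk 'NR == 2 {gsub("%", "", $5); print "free space on /: " 100 - $5 "%"}'

# Total processor time: non-idle share from /proc/stat. A single sample
# covers time since boot; a real monitor would diff two samples.
awk '/^cpu / {idle = $5; total = 0; for (i = 2; i <= NF; i++) total += $i;
              printf "cpu busy: %.1f%%\n", 100 * (total - idle) / total}' /proc/stat
```

Windows guests expose the equivalent counters through Performance Monitor and WMI.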
Virtualization complicates the backup task. On the one hand, you can continue to back up by traditional means, installing an agent on each virtual machine that copies files to the right place. This method is reliable, but may not behave well in a virtual environment: when the agents are launched simultaneously on all virtual machines, a “storm” occurs, in which each virtual machine tries to seize all the server’s resources to complete its backup task.
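One simple mitigation for such a storm is to stagger the agents’ start times, for example with a small random delay before each job. A sketch with tar standing in for the backup agent (the VM names and paths are invented for the demo):

```shell
# Demo data to back up.
mkdir -p /tmp/vm-data
echo "payload" > /tmp/vm-data/config.txt

# Run one "agent" per VM, each delayed by a random offset so the jobs
# do not all hit the storage at the same instant.
for vm in web01 db01 app01; do
  (
    sleep $(( $(od -An -N1 -tu1 /dev/urandom) % 3 ))   # 0-2 s random delay
    mkdir -p "/tmp/backup/$vm"
    tar -czf "/tmp/backup/$vm/files.tar.gz" -C /tmp/vm-data .
  ) &
done
wait
ls /tmp/backup    # the three per-VM backup directories
```

Real backup products apply the same idea with configurable job windows and per-host concurrency limits.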
Another approach is that the backup can be done at the virtualization server level. In this case, copying data from virtual machines can be much faster.
One of the main tasks of a system administrator is backing up data on workstations and laptops. After all, if a laptop is stolen, its data can end up with intruders, and users rarely enable encryption themselves.
The topic of deduplication deserves special attention. As we already wrote, new virtual machines are often created from templates, so the data on virtual machine disks is largely identical. Deduplication finds identical blocks on the disks and stores only one copy of each in the backup.
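The idea can be illustrated with a toy block-hash calculation: split the data into fixed-size blocks, hash each block, and count how many distinct blocks would actually need to be stored. The file names and the 4 KiB block size below are arbitrary:

```shell
# Build a demo file of four identical 4 KiB blocks.
f=/tmp/dedup-demo.bin
head -c 4096 /dev/zero > /tmp/one-block
cat /tmp/one-block /tmp/one-block /tmp/one-block /tmp/one-block > "$f"

# Hash every 4 KiB block and count the distinct hashes.
total=$(( $(stat -c %s "$f") / 4096 ))
unique=$(for i in $(seq 0 $((total - 1))); do
  dd if="$f" bs=4096 skip="$i" count=1 status=none | sha256sum
done | sort -u | wc -l)

echo "blocks: $total, unique: $unique"   # here: 4 blocks, 1 unique
```

Production deduplication engines work the same way at scale, usually with variable-size chunking and an index of chunk hashes.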
If VDI (virtual desktop infrastructure) technology is used, all processed data resides on virtual servers or storage on the cloud provider’s side. Thus, backups are performed in the cloud, and if a client device is stolen, the data will not fall into the wrong hands.
Optimizing virtual hard disks
Just as hard disks on physical servers need to be defragmented, virtual disks on virtual machines need to be compacted. Compacting applies to dynamically expanding and differencing virtual hard disks. The Optimize-VHD cmdlet reduces the size of a VHD file by removing the empty space left after data is deleted from the virtual hard disk. It also repacks blocks for more efficient use of disk space, which further reduces the size of VHD files.
Transferring virtual servers from other systems
What if a virtual infrastructure already exists in one cloud, but for some reason needs to be transferred to another provider? This is where the virtual machine export feature comes in. When migrating virtual machines out of the Amazon cloud, you can export them to a bucket via the Amazon S3 control panel.
If the virtual infrastructure was created in the Microsoft Azure cloud, you will have to use the special Save-AzureVhd PowerShell cmdlet, and if in Hyper-V, then the built-in export function of the Hyper-V Management snap-in.
You can also export the virtual server to an OVA/OVF file (an open standard for storing and distributing virtual machines), and then import the resulting file using the new provider’s control panel.
Setting up multi-factor authentication
For secure access to the cloud infrastructure, a password alone is no longer enough. Multi-factor authentication requires an additional verification step on top of the user’s credentials. Possible multi-factor authentication options include:
- mobile applications;
- phone calls;
- text messages.
Virtual Infrastructure – Opportunities or Risks?
Although the administrator of a virtual infrastructure no longer needs to take care of physical equipment, they have no fewer worries. Administering a virtual infrastructure may require even more attention than a physical one, as the infrastructure becomes more complex and new tasks are added. Virtualization brings both new opportunities and new risks, and both need to be taken into account.