Facebook Down Globally: A Case of the Mondays for Facebook, Instagram, and WhatsApp as they go dark midday Monday

Some of the biggest social media sites on the planet, including Facebook, went down globally starting at noon EDT and are still not up in some regions. That’s right: no Instagram #motivationmondays or “Ugh, is it Friday yet?” Facebook posts from your first-semester freshman-year college roommate. As the sky was falling for millennials (myself included) and your favorite newly political aunt, the teams at Facebook were scrambling to keep their sites (including Instagram and WhatsApp, both of which are Facebook-owned) operating.

Facebook Chief Technology Officer Mike Schroepfer took to Twitter to address the situation:

“*Sincere* apologies to everyone impacted by outages of Facebook-powered services right now. We are experiencing networking issues and teams are working as fast as possible to debug and restore as fast as possible”

Facebook outages of this magnitude are rare; having Facebook down globally for this long is something that hasn’t happened in years. To put into perspective just how impactful the outage is, the term “Facebook down” was Googled more than 5,000,000 times today alone.
The cause of the outage is speculated to be tied to a recently aired “60 Minutes” segment in which whistleblower and former Facebook product manager Frances Haugen claimed that Facebook knows the platform is used to spread hate and has tried to hide evidence of it. Facebook, of course, denies this claim.

According to CNN, “The interview followed weeks of reporting about and criticism of Facebook after Haugen released thousands of pages of internal documents to regulators and the Wall Street Journal. Haugen is set to testify before a Senate subcommittee on Tuesday.”

Jake Williams, CTO of cybersecurity firm BreachQuest mentioned to the Associated Press that this was an “operational issue” caused by human error.

Regardless of the reasoning, I’m sure this will be an issue that will be discussed for quite some time in the technology space as the outage was global and not regional. Facebook shares opened at $335.50 and closed at $326.32, a drop of 4.89%.

Nonetheless, as I’m sure many were beside themselves that they couldn’t post a nice “Los Angeles” filtered photo of their lunch on Instagram to show their followers, we can only hope, for Facebook’s sake, they can have it fixed by the time we want to show off our dinner.

Facebook has since confirmed, in a post on its company blog, that the outage was due to a botched configuration change. Facebook posted the following:

“Our engineering teams have learned that configuration changes on the backbone routers that coordinate network traffic between our data centers caused issues that interrupted this communication. This disruption to network traffic had a cascading effect on the way our data centers communicate, bringing our services to a halt.”

Information about the depth of the outage continues to grow. It’s reported that Facebook’s internal chat was also down, limiting communication within the company; the outage even went so far that employees’ keycards began to fail, leaving them unable to enter certain buildings.

The Krebs on Security blog explains the problem as follows:

“…sometime this morning Facebook took away the map telling the world’s computers how to find its various online properties. As a result, when one types Facebook.com into a web browser, the browser has no idea where to find Facebook.com, and so returns an error page.”

The Facebook campus was only the beginning. Because of the sites’ interconnectivity, the outage stretched to sites that were utilizing Facebook’s authentication process as well. The effects resonated across the board: from those who rely on Facebook and WhatsApp for primary communication, to small businesses unable to get in touch with their customer base, to the large number of people in countries where Facebook effectively is the internet.

We will continue to update as information becomes available.

Data Center Risk Assessments

A data center risk assessment is designed to give IT executives and staff a deep evaluation of all of the risks associated with delivering IT services. An IT infrastructure risk management and monitoring system is needed to monitor everything in the data center for better performance.

Risk assessments include the following:

Data Center Heat Monitoring

Data centers have racks of high-specification servers, and those produce a high level of heat. This is a physical risk: the server room must be equipped with a cooling system and humidity sensors for monitoring. If the cooling system fails, high temperatures will cause system failures, and those failures will affect our clients.
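As a minimal sketch of what a heat-monitoring check can look like, the shell function below compares a sensor reading against a threshold and raises an alert. The 27 °C threshold and the hard-coded readings are assumptions for illustration, not vendor specifics; real readings would typically come from SNMP or IPMI queries against the sensor hardware.

```shell
#!/bin/sh
# Illustrative heat-monitoring check. Threshold and readings are
# assumptions for this sketch; real values would come from SNMP/IPMI
# sensor queries.
THRESHOLD=27   # degrees Celsius (assumed alert threshold)

check_temp() {
    temp="$1"  # one sensor reading, in whole degrees Celsius
    if [ "$temp" -gt "$THRESHOLD" ]; then
        echo "ALERT: rack temperature ${temp}C exceeds ${THRESHOLD}C"
    else
        echo "OK: rack temperature ${temp}C"
    fi
}

check_temp 24
check_temp 31
```

A cron job running a check like this against every rack sensor, paired with humidity checks, is the simplest form of the monitoring described above.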

Electricity

All electrical equipment needs power. A UPS will help protect servers and networking devices from power failure, but the cooling system will not work when power is lost, which will cause high temperatures in the server room and, in turn, server failure. To avoid this, we need an automatic backup generator so the cooling system keeps working whenever we face a power loss.

Door access

Unauthorized entry to the data center is a major concern. We must conduct a data center vulnerability analysis and monitor data center security, including who is entering the data center. A biometric-operated door will help protect against unauthorized entry.

Operations Review

We will make sure all the necessary items are monitored and all devices are kept up to date. We will conduct maintenance for all devices in our data center to provide 100% uptime for our clients. Risk mitigation strategies and a high-quality maintenance program keep operational risks away, keep equipment in like-new condition, and maximize reliability and performance.

Capacity management Review

Capacity management determines whether your infrastructure and services are capable of meeting your targets for capacity, performance, and growth. We will assess your space, power, and cooling capacity management processes.
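A capacity review ultimately reduces to comparing what is in use against what is available. The figures below are hypothetical, purely to sketch the arithmetic; in a real assessment they would come from the facility’s inventory or a DCIM export.

```shell
#!/bin/sh
# Hypothetical capacity figures for this sketch; real numbers would
# come from an inventory or DCIM export.
total_u=420    # total rack units on the floor (assumed)
used_u=336     # rack units currently occupied (assumed)

# prints: Space utilization: 80%
echo "Space utilization: $(( used_u * 100 / total_u ))%"
```

The same ratio applies to power (kW provisioned vs. kW available) and cooling capacity.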

Change Management

A robust change management system should be put in place for any activity. The change management system should include a formal review process, based on well-defined procedures, that captures all activities that can occur at the data center. Basically, any activity with real potential for impact on the data center must be formally scheduled and then approved by accountable persons.

 

Best Practices for Data Center Risk Assessment

Effective data center risk management is crucial for maintaining uptime, security, and performance. A comprehensive risk assessment for data centers should include the following best practices:

  1. Identify Potential Data Center Vulnerabilities:
    Start by evaluating physical security risks, power systems, cooling mechanisms, and network infrastructure for weak points. With data center vulnerability analysis, assess risks related to natural disasters, cyberattacks, and human error.
  2. Evaluate Disaster Recovery Plans:
    A robust data center disaster recovery strategy is essential. Test your risk mitigation strategies and recovery plans regularly to ensure minimal downtime and data integrity in case of emergencies.
  3. Perform Regular Audits and Testing:
    Periodic assessments of systems, policies, and procedures help uncover hidden risks including operational risks. Use tools like vulnerability scanners and conduct simulated disaster drills to identify gaps.
  4. Implement Redundancy and Failover Systems:
    IT infrastructure risk management reduces data center security risks by having backup power supplies, redundant cooling, and failover networks. This ensures continuity even during disruptions.

Proactively addressing data center vulnerabilities ensures your organization is prepared for potential threats, minimizing downtime and safeguarding critical operations.

A Look at Data Center Infrastructure Management

What Is a Data Center?

A data center is a physical facility that organizations use to house their critical applications and data. A data center’s design is based on a network of computing and storage resources that enable the delivery of shared applications and data. The key components of a data center design include routers, switches, firewalls, storage systems, servers, and application-delivery controllers.

Modern data centers are very different than they were just a short time ago. Infrastructure has shifted from traditional on-premises physical servers to virtual networks that support applications and workloads across pools of physical infrastructure and into a multicloud environment. In this era, data exists and is connected across multiple data centers, the edge, and public and private clouds. The data center must be able to communicate across these multiple sites, both on-premises and in the cloud. Even the public cloud is a collection of data centers. When applications are hosted in the cloud, they are using data center resources from the cloud provider.

Importance of Data Centers

In the world of enterprise IT, data centers are designed to support business applications and activities that include:

  • Email and file sharing
  • Productivity applications
  • Customer relationship management (CRM)
  • Enterprise resource planning (ERP) and databases
  • Big data, artificial intelligence, and machine learning
  • Virtual desktops, communications and collaboration services

Core Components of a Data Center

A data center infrastructure design may include:

  • Servers
  • Computers
  • Networking equipment, such as routers or switches.
  • Security, such as firewall or biometric security system.
  • Storage, such as storage area network (SAN) or backup/tape storage.
  • Data center management software/applications.
  • Application delivery controllers

These components store and manage business-critical data and applications, so data center security is critical in data center design. Together, they provide:

Network infrastructure: This connects servers (physical and virtualized), data center services, storage, and external connectivity to end-user locations.

Storage infrastructure: Data is the fuel of the modern data center. Storage systems are used to hold this valuable commodity.

Computing resources: Applications are the engines of a data center. These servers provide the processing, memory, local storage, and network connectivity that drive applications.

How do data centers operate?

Data center services are typically deployed to protect the performance and integrity of the core data center components.

Network security appliances:  These include firewall and intrusion protection to safeguard the data center.

Application delivery assurance: To maintain application performance, these mechanisms provide application resiliency and availability via automatic failover and load balancing.

What is in a data center facility?

Data center components require significant infrastructure to support the center’s hardware and software. These include power subsystems, uninterruptible power supplies (UPS), ventilation, cooling systems, fire suppression, backup generators, and connections to external networks.

Standards for data center infrastructure

The most widely adopted standard for data center design and data center infrastructure is ANSI/TIA-942. It includes standards for ANSI/TIA-942-ready certification, which ensures compliance with one of four categories of data center tiers rated for levels of redundancy and fault tolerance.

Tier 1: Basic site infrastructure. A Tier 1 data center offers limited protection against physical events. It has single-capacity components and a single, non-redundant distribution path.

Tier 2: Redundant-capacity component site infrastructure. This data center offers improved protection against physical events. It has redundant-capacity components and a single, non-redundant distribution path.

Tier 3: Concurrently maintainable site infrastructure. This data center protects against virtually all physical events, providing redundant-capacity components and multiple independent distribution paths. Each component can be removed or replaced without disrupting services to end users.

Tier 4: Fault-tolerant site infrastructure. This data center provides the highest levels of fault tolerance and redundancy. Redundant-capacity components and multiple independent distribution paths enable concurrent maintainability, and the site can withstand one fault anywhere in the installation without downtime.

Types of data centers

Many types of data centers and service models are available. Their classification depends on whether they are owned by one or many organizations, how they fit (if they fit) into the topology of other data centers, what technologies they use for computing and storage, and even their energy efficiency. There are four main types of data centers:

Enterprise data centers

These are built, owned, and operated by companies and are optimized for their end users. Most often they are housed on the corporate campus.

Managed services data centers

These data centers are managed by a third party (or a managed service provider) on behalf of a company. The company leases the equipment and infrastructure instead of buying it.

Colocation data centers

In colocation (“colo”) data centers, a company rents space within a data center owned by others and located off company premises. The colocation data center hosts the infrastructure: building, cooling, bandwidth, security, etc., while the company provides and manages the components, including servers, storage, and firewalls.

Cloud data centers

In this off-premises form of data center, data and applications are hosted by a cloud services provider such as Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, or another public cloud provider.

Top Seven Data Center Management Issues


1. Data security

Data center security refers to the physical practices and virtual technologies used to protect a data center from external threats and attacks. A data center is a facility that stores IT infrastructure, composed of networked computers and storage used to organize, process, and store large amounts of data.

Security is an ongoing challenge for any data center. A data breach can cost millions of dollars in lost intellectual property, exposure of confidential data and stolen personally identifiable information. Risk management and securing both stored data and data as it is transmitted across the network are primary concerns for every data center administrator.

Data centers are complex, and to protect them, security components must be considered separately while still following one holistic security policy. Security can be divided into:

Physical security encompasses a wide range of processes and strategies used to prevent outside interference.

Software or virtual security prevents cybercriminals from entering the network by bypassing the firewall, cracking passwords, or exploiting other loopholes.

 

2. Real-time Monitoring and Reporting

Real-time (data) monitoring is the delivery of continuously updated information streaming at zero or low latency. IT monitoring involves collecting data periodically throughout an organization’s IT environment, from on-premises hardware and virtualized environments to networking and security levels.

Data centers have a lot going on inside them, so unexpected failures are inevitable. There are applications, connecting cables, network connectivity, cooling systems, power distribution, storage units, and much more running all at once. Constant monitoring and reporting of different metrics is a must for data center operators and managers.

A DCIM system provides deeper insight into data center operations and performance metrics. It helps you track, analyze, and generate reports in real time, so you’re capable of making well-informed decisions and taking immediate action accordingly.

The best example of this software is PRTG. PRTG Network Monitor is an agentless network monitoring package from Paessler AG. It can monitor and classify system conditions like bandwidth usage or uptime and collect statistics from miscellaneous hosts such as switches, routers, servers, and other devices and applications.

 

 

3. Uptime and Performance Maintenance

Measuring performance and ensuring the uptime of data centers is a major concern for data center managers and operators. This also includes maintaining power and cooling accuracy and ensuring the energy efficiency of the overall structure. Manually calculating the metrics is of little or no help in most cases.

A powerful tool like a DCIM system helps you, as a data center manager, measure essential metrics like Power Usage Effectiveness (PUE) in real time, making it easy for you to optimize and manage uptime and other aspects of performance.
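For reference, PUE is simply total facility power divided by IT equipment power. The sketch below hard-codes two meter readings for illustration; a DCIM system would pull live values from the UPS and facility meters.

```shell
#!/bin/sh
# PUE = total facility power / IT equipment power (1.0 is the ideal).
# The two readings below are assumptions for this sketch.
facility_kw=1500   # total facility draw in kW (assumed)
it_kw=1000         # IT equipment draw in kW (assumed)

pue=$(awk "BEGIN { printf \"%.2f\", $facility_kw / $it_kw }")
echo "PUE: $pue"   # prints: PUE: 1.50
```

The closer the ratio is to 1.0, the less power the facility spends on cooling, lighting, and distribution losses relative to the IT load itself.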

 

4. Cabling Management Issues

Data centers use many cables, and they can become a nightmare to deal with if not managed well. Facilities should find a way to store and manage all cables, from power cables to fiber-optic wiring, to make sure they all go where they’re supposed to. Unstructured and messy cabling is chaotic, even in small data rooms. It can make any data center look unprofessional in a heartbeat, not to mention dangerous.

Poor cable management can restrict airflow, especially in small spaces. Restricted airflow puts unnecessary strain on the facility’s cooling system and computing equipment. The challenge here is that IT personnel need to organize and structure all cabling to make future management easier. Scalable infrastructure needs organized cable management because inefficient wiring can cause deployment restrictions.

 

 

5. Balancing cost controls with efficiency

Budgeting and cost containment are ongoing concerns for any department, but the data center has its own unique cost-control concerns. CIOs want to ensure that their data centers are efficient, innovative and nimble, but they also have to be careful about controlling costs. For example, greening the data center is an ongoing goal, and promoting energy efficiency reduces operating costs at the same time that it promotes environmental responsibility, so IT managers monitor power usage effectiveness. Other strategies such as virtualization are increasing operating efficiency while containing costs.

 

6. Power management and Lack of cooling efficiencies

In addition to power conservation, power management is creating a greater challenge. Server consolidation and virtualization reduce the amount of hardware in the data center, but they don’t necessarily reduce power consumption. Blade servers consume four to five times the energy of previous types of data storage, even though they are usually more efficient overall. As equipment needs change, there is more concern about power and cooling demands.

Without proper monitoring and management, it’s challenging to be efficient in your data center management and operations. Charts and reports provide the information needed to determine cooling infrastructure utilization and potential gains to be realized by airflow management improvements, such as environment improvements, reduced operating costs, and increased server utilization.

 

 

7. Capacity planning

Maintaining optimal efficiency means keeping the data center running at peak capacity, but IT managers usually leave room for error—a capacity safety gap—in order to make sure that operations aren’t interrupted. Over-provisioning is inefficient and wastes storage space, computer processing and power. Data center managers are increasingly concerned about running out of capacity, which is why more data centers are using DCIM systems to identify unused computing, storage and cooling capacity. DCIM helps manage the data center to run at full capacity while minimizing risk.

How to install openDCIM on Ubuntu to simplify data center management


Managing your data center infrastructure can be a nightmare unless you have the right tools. Here’s how to install one such free tool called openDCIM.

If you’re looking for an open source data center infrastructure management tool, look no further than openDCIM. Considering what you get for the cost of the software (free), this is a web-based system you’ll definitely want to try. openDCIM is a free and open source data center inventory management solution. It is already used by a number of organizations and is quickly improving thanks to the efforts of its developers. The stated number one goal for openDCIM is to eliminate the excuse for anybody to ever track their data center inventory in a spreadsheet or word processing document again. We’ve all been there, which is what drove the developers to create this project.

With openDCIM you can:

Provide asset tracking of the data center

Support multiple rooms

Manage space, power, and cooling

Manage contacts’ business directories

Track fault tolerance

Compute Center of Gravity for each cabinet

Manage templates for devices

Track cable connections within each cabinet and each switch device

Archive equipment sent to salvage/disposal

Integrate with intelligent power strips and UPS devices

If you have an existing Ubuntu server handy (it can be installed on a desktop as well), you can get openDCIM up and running with a bit of effort. The installation isn’t the simplest you’ll ever do; however, following is an easy walk-through of installing this powerful system on Ubuntu.

 

Installing openDCIM

If you don’t already have a LAMP stack installed on the Ubuntu machine, do so with these simple steps.

Open a terminal window.

Issue the command sudo apt-get install lamp-server^

Type your sudo password and hit Enter.

Allow the installation to complete.

During the installation, you’ll be prompted to set a MySQL admin password. Make sure to take care of that and remember that password.

Once you have the LAMP stack ready, there are a few other dependencies that must be installed. Go back to your terminal window and issue the following command:

sudo apt-get install php-snmp snmp-mibs-downloader php-curl php-gettext graphviz

Allow that command to complete, and you’re ready to continue.

 

Download the software

The next step is to download the latest version of openDCIM—as of this writing, that version is 4.3. Go back to your terminal window and issue the following commands:

wget http://www.opendcim.org/packages/openDCIM-4.3.tar.gz

tar xvzf openDCIM-4.3.tar.gz

sudo mv openDCIM-4.3 dcim

sudo mv dcim /var/www/

The wget command downloads the archive into your current working directory, tar unpacks it, and the two mv commands rename the newly created folder and move it into /var/www/.

You’ll also need to change a permission or two with the command:

sudo chgrp -R www-data /var/www/dcim/pictures /var/www/dcim/drawings

 

Create the database

Next we create the database. Open the MySQL prompt with the command mysql -u root -p and then, when prompted, enter the password you created during the LAMP installation. Issue the following commands:

create database dcim;

grant all on dcim.* to 'dcim'@'localhost' identified by 'dcim';

flush privileges;

exit;
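One caveat worth noting: the grant ... identified by syntax above was removed in MySQL 8.0. If you are on a newer server, the equivalent setup splits user creation from the grant, as in this sketch (same illustrative dcim credentials as above):

```shell
# MySQL 8.0+ equivalent of the grant above; run from the shell and
# enter the root password when prompted.
mysql -u root -p <<'SQL'
CREATE USER 'dcim'@'localhost' IDENTIFIED BY 'dcim';
GRANT ALL ON dcim.* TO 'dcim'@'localhost';
FLUSH PRIVILEGES;
SQL
```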

 

Configure the database

Since we created the database dcim and used the password dcim, the built-in database configuration file will work without editing; all we have to do is rename the template with the command:

sudo cp /var/www/dcim/db.inc.php-dist /var/www/dcim/db.inc.php

 

 

Configure Apache

A virtual host must be configured for Apache. We’re going to use the default-ssl.conf configuration for openDCIM. Go to your terminal window, change to the /etc/apache2/sites-available directory, and open the default-ssl.conf file. In that file we’re going to first change the DocumentRoot variable to /var/www/dcim and then add the following below that line:

<Directory "/var/www/dcim">
Options All
AllowOverride All
AuthType Basic
AuthName dcim
AuthUserFile /var/www/dcim/.htpassword
Require all granted
</Directory>

Save and close that file.

 

Set up user access

We also must secure openDCIM by restricting access to authorized users. We’ll do that with the help of htaccess. Create the file /var/www/dcim/.htaccess with the following contents:

AuthType Basic
AuthName “openDCIM”
AuthUserFile /var/www/opendcim.password
Require valid-user

Save that file and issue the command:

sudo htpasswd -cb /var/www/opendcim.password dcim dcim

Enable Apache modules and the site

The last thing to do (before pointing your browser to the installation) is to enable the necessary Apache modules and enable the default-ssl site. You may find that some of these are already enabled. Issue the following commands:

sudo a2enmod ssl

sudo a2enmod rewrite

sudo a2ensite default-ssl

sudo service apache2 restart

You’re ready to install openDCIM.

Installing openDCIM

You should point your browser to https://localhost/install.php (you can replace localhost with the IP address of your openDCIM server). You will be prompted for the directory credentials, which are the same ones used with htaccess: the username is dcim and the password is dcim. At this point it should pass the pre-flight checklist and take you directly to the department creation page (Figure A).

 

The very last step is to remove the /var/www/dcim/install.php file. Then point your browser to https://localhost (or the server’s IP address), and you’ll be taken to the main openDCIM site (Figure B).

 

The openDCIM main page

 

Ready to serve

At this point, openDCIM is ready to serve you. You’ll most likely find more than you expect from a free piece of software. Spend time getting up to speed with the various features, and you’ll be ready to keep better track of your various data centers, projects, infrastructure, and so much more…all from one centralized location.

Tips to Improve Data Center Management


Data center infrastructure is an integral part of modern businesses, and it is becoming complex to manage amid dynamic landscapes. Cooling, power, space, cabling: everything must run efficiently to enable business continuity. Here are some tips data centers can follow for efficient management.

 

Deploy DCIM Tools

Few things are more critical to data center operations best practices than an effective data center infrastructure management (DCIM) platform. Managing a data center without DCIM software is nearly impossible: without knowing what’s happening in the moment, even minor problems can be extremely disruptive because they take the facility by surprise.

Implementing DCIM tools provides complete visibility into the facility’s IT infrastructure, allowing data center personnel to monitor power usage, cooling needs, and traffic demands in real time. They can also analyze historical trends to optimize deployments for better performance. With a DCIM platform in place, IT support tickets can be resolved quickly and customers can communicate their deployment needs without having to go through a complicated request process.

Optimize Data Floor Space

Deployments matter, especially when it comes to issues of power distribution and rack density. Inefficient deployments can lead to problems like wasted energy going to underutilized servers or too much heat being generated for the cooling infrastructure to manage. It is no longer possible to manage temperatures at a facility level because rack densities may vary widely, creating hot spots in one zone while another zone is cooled below the desired temperature.

The layout of the data floor can be subject to quite a bit of change, especially in a colocation facility where new servers are being deployed on a regular basis. Data centers need to be aware of how every piece of equipment on the data floor interacts with the others in order to optimize the environment efficiently.

Installing a network of temperature sensors across the data center helps ensure that all equipment is operating within the recommended temperature range. By sensing temperatures at multiple locations, the airflow and cooling capacity of the precision cooling units can be more precisely controlled, resulting in more efficient operation.

With power densities and energy costs both rising, the ability to monitor energy consumption is essential for effective data center management. To gain a comprehensive picture of data center power consumption, power should be monitored at the Uninterruptible Power Supply (UPS), the room Power Distribution Unit (PDU) and within the rack. Measurements taken at the UPS provide a base measure of data center energy consumption that can be used to calculate Power Usage Effectiveness (PUE) and identify energy consumption trends. Monitoring the room PDU prevents overload conditions at the PDU and helps ensure power is distributed evenly across the facility.
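As a small sketch of what rack-level power monitoring boils down to, the loop below sums hypothetical per-outlet current readings and compares the total against an assumed PDU rating. Both the 30 A rating and the readings are illustrative; real values would come from the PDU’s SNMP interface.

```shell
#!/bin/sh
# Hypothetical PDU readings for this sketch; real values would come
# from the PDU's SNMP interface.
rated_amps=30            # assumed PDU rating, in amps
outlet_amps="4 6 5 7 3"  # assumed per-outlet draw, in amps

total=0
for a in $outlet_amps; do
    total=$(( total + a ))
done

echo "PDU load: ${total}A of ${rated_amps}A"
if [ "$total" -gt "$rated_amps" ]; then
    echo "ALERT: PDU overloaded"
fi
```

A DCIM platform performs this comparison continuously across every PDU and raises alarms before an overload trips a breaker.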

With increasing densities, a single rack can now support the same computing capacity that used to require an entire room. Visibility into conditions in the rack can help prevent many of the most common threats to rack-based equipment, including accidental or malicious tampering and the presence of water, smoke, and excess humidity or temperature. A rack monitoring unit can be configured to trigger alarms when rack doors are opened, when water or smoke is detected, or when temperature or humidity thresholds are exceeded; these units can be connected to a central monitoring system for efficient monitoring.

In addition to constantly monitoring the data floor’s power, density, and cooling needs, data center operators should approach every deployment with an eye toward efficiency and performance. The challenge is to deliver the optimal IT infrastructure setup for each customer without compromising performance elsewhere on the data floor. DCIM software, with its accumulated data on power and cooling usage, can help ensure that every colocation customer is getting the most efficient deployment possible while also maintaining the overall health of the data center’s infrastructure.

 

Organize Cabling

Data centers necessarily use quite a lot of cable. Whether it’s bulky power cables or fiber-optic network cables, the facility must find ways to manage all that cabling effectively to make sure it all goes to the proper ports. While messy, unstructured cabling might be a viable solution for a very small on-premises data room in a private office, it’s completely unsuitable, and even dangerous, for even the smallest data centers. Cabling used in scalable infrastructure must be highly structured and organized if IT personnel are going to have any hope of managing it all.

Some of the best practices are as follows:

  • Run cables to the sides of server racks to ease adding or removing servers from the shelf.
  • Bundle cables together and route them down to the floor in data centers with elevated floors, or up to the ceiling in data centers where wires run through the ceiling, making it convenient to connect the next piece of hardware.
  • Plan in advance for installing additional hardware. Disorganized cabling can interfere with air circulation and cooling patterns. Planning prevents damage from quickly rising temperatures caused by restricted air movement.
  • Label cables securely on each end. This labeling process enables you to conveniently locate cables for testing or repair, install new equipment, or remove extra cables after equipment has been moved or upgraded, which saves time and money.
  • Color code cables for quick identification. Choose a color scheme that works for you and your team. It may be wise to put up a legend signifying the meaning of the colors of each cable. You may also color-code the cable’s destination, especially for larger installations across floors or offices.
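The labeling and color-coding practices above can be sketched in code. The following is a minimal illustration; the rack/U-position/port naming scheme and the color map are assumptions for the example, not an industry standard:

```python
# Generate a structured cable label: <rack>-<U position>-<port> plus jacket color.
# The naming scheme and COLOR_MAP below are illustrative assumptions.

COLOR_MAP = {
    "power": "red",
    "copper-network": "blue",
    "fiber-network": "yellow",
}

def cable_label(rack: str, u_position: int, port: int, cable_type: str) -> str:
    """Return the label text (with color) to print for one cable end."""
    color = COLOR_MAP[cable_type]
    return f"{rack}-U{u_position:02d}-P{port:02d} ({color})"

# Print the same label for both ends so a technician can trace the run.
print(cable_label("RACK-A3", 12, 4, "fiber-network"))  # RACK-A3-U12-P04 (yellow)
```

Printing an identical label for both ends of each run is what makes testing, repair and decommissioning fast, as the bullet list suggests.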

Poorly organized cabling is not only messy and difficult to work with, but it can also create serious problems in a data center environment. Too many cables in a confined space can restrict air flow, putting more strain on both computing equipment and the facility’s cooling infrastructure. Inefficient cabling can also place unnecessary restrictions on deployments, which can make power distribution inefficiencies even worse.

 

Cycle Equipment

Computer technology advances quickly. While the typical lifecycle of a server is about three to five years, more efficient designs that allow data centers to maximize their space and power usage can often make a piece of equipment obsolete before its lifecycle would otherwise suggest. With many data center standards pushing toward increased virtualization, there is a powerful incentive to replace older, less efficient servers.

But data centers don’t just need to think about cycling computing equipment. Power distribution units (PDUs), air handlers, and uninterruptible power supply (UPS) batteries all have an expected lifespan. Replacing these infrastructure elements on a regular schedule or controlled monitoring cycle allows facilities to maximize the efficiency of their data center operations and deliver superior performance to colocation customers.
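A replacement schedule like this can be tracked with a simple install-date-plus-lifespan calculation. The lifespan figures below are illustrative assumptions; actual values vary by vendor and duty cycle:

```python
from datetime import date, timedelta

# Illustrative expected lifespans in years -- actual values vary by vendor.
LIFESPAN_YEARS = {"server": 4, "ups_battery": 5, "pdu": 10, "air_handler": 15}

def replacement_due(kind: str, installed: date) -> date:
    """Return the approximate date by which this asset should be cycled out."""
    return installed + timedelta(days=365 * LIFESPAN_YEARS[kind])

def overdue(assets: list[tuple[str, str, date]], today: date) -> list[str]:
    """List asset IDs whose replacement date has already passed."""
    return [asset_id for asset_id, kind, installed in assets
            if replacement_due(kind, installed) <= today]

fleet = [
    ("srv-001", "server", date(2017, 6, 1)),
    ("ups-001", "ups_battery", date(2021, 1, 15)),
]
print(overdue(fleet, date(2022, 3, 1)))  # ['srv-001']
```

In a real DCIM deployment this data would come from the asset repository rather than a hard-coded list.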

By implementing a number of best practices, data centers can significantly improve their operations in terms of efficiency and performance. Colocation customers and MSP partners stand to benefit immensely from these practices, reaping the benefits of reduced energy costs and a more robust, reliable IT infrastructure.

 

Perform Routine Maintenance

Regular maintenance schedules cut down on hardware failures and, at the very least, allow technicians to prepare for a problem before it happens. Routine maintenance includes checking operational hardware, identifying problematic equipment, performing regular data backups and monitoring outlying equipment. Preventative maintenance can mean the difference between a minor issue and a complete hardware failure.

When implemented effectively, data center infrastructure management delivers value not only to data center providers but also to their customers. It enables improved operations, greater agility and lower risk, and it frees teams to focus on enhancing data center systems and approaches.

Value of DCIM – Data Center Infrastructure Management

Most people think of a data center simply as a place where data is stored. That is true, but there is a lot happening inside that end users never see. A data center technician is responsible for tasks that are divided into several categories and assigned according to skill set. Because the functionality of any application or program depends on how well a data center is managed, management becomes a critical responsibility for everyone involved. The data center can fairly be called the heart of the modern IT industry, holding it together to provide the services people need.

To understand the value that Data Center Infrastructure Management provides, we need to look at every stage of a data center’s life, from deployment through maintenance. DCIM can be divided into five areas of focus:

  1. Capacity Planning
  2. Asset Management
  3. Power Monitoring
  4. Environmental Monitoring
  5. Change Management

 

Let us take an example to see how these terms fit together. Assume a fully functional data center is about to take on a new client. Before the client is onboarded, planning begins for the additional hardware and networking required to support them. This is where Asset Management and Capacity Planning come in. Capacity planning gathers the data that drives purchasing decisions for additional equipment, and that same data is used to optimize the current IT asset layout. Asset management is the method of centrally managing the assets inside a data center: where a particular asset is located, how it is connected in relation to other assets, who owns it and what maintenance coverage it carries.

Power Monitoring defines the total power requirement of this new client alongside the current power capacity of the existing data center. If the requirement exceeds the available capacity, new hardware is added to support the new equipment. Power monitoring gives a data center the ability to investigate the entire power chain, from the generator down to a specific outlet on an intelligent cabinet PDU, which helps diagnose potential problems, balance power capacity across the facility, understand trends and receive alerts when problems arise, with sensors monitoring these values constantly.
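A simplified sketch of the capacity check described above might look like the following. The facility capacity, threshold and readings are hypothetical figures for illustration:

```python
# Check whether a new client's draw fits within remaining power capacity.
# Capacity, threshold and readings below are hypothetical.

FACILITY_CAPACITY_KW = 500.0
ALERT_THRESHOLD = 0.80  # raise an alert above 80% utilization

def can_onboard(current_draw_kw: float, new_client_kw: float) -> bool:
    """Would the new client's load stay within total facility capacity?"""
    return current_draw_kw + new_client_kw <= FACILITY_CAPACITY_KW

def utilization_alert(current_draw_kw: float) -> bool:
    """True when utilization crosses the alert threshold."""
    return current_draw_kw / FACILITY_CAPACITY_KW > ALERT_THRESHOLD

print(can_onboard(380.0, 90.0))         # True  (470 kW <= 500 kW)
print(utilization_alert(380.0 + 90.0))  # True  (94% > 80%)
```

In practice these readings would stream from intelligent PDUs and branch-circuit monitors rather than being passed in by hand.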

Environmental monitoring captures data on temperature, pressure, humidity and airflow throughout the data center. Because these factors can severely impact equipment, round-the-clock monitoring is essential: once conditions drift, hardware behavior changes and normal operations are interrupted.
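A minimal threshold check for such monitoring could be sketched as follows. The ranges here are illustrative assumptions, loosely based on commonly recommended operating envelopes; real deployments should use vendor and industry guidance:

```python
# Flag environmental readings outside acceptable ranges.
# RANGES values are illustrative assumptions, not a standard.

RANGES = {
    "temperature_c": (18.0, 27.0),
    "relative_humidity_pct": (40.0, 60.0),
}

def out_of_range(readings: dict[str, float]) -> list[str]:
    """Return a human-readable alert for each metric outside its range."""
    alerts = []
    for metric, value in readings.items():
        low, high = RANGES[metric]
        if not (low <= value <= high):
            alerts.append(f"{metric}={value} outside [{low}, {high}]")
    return alerts

print(out_of_range({"temperature_c": 31.5, "relative_humidity_pct": 45.0}))
```

A monitoring loop would run a check like this on every sensor poll and feed the alerts into the DCIM alerting pipeline.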

 

Change management creates an automated process for move, add and change work with real-time tracking of work orders. This improves employee productivity, creates a repeatable, streamlined process and assists with compliance.

Selecting DCIM hardware and software to meet specific requirements can be challenging. Establishing a hardware roadmap and a business process is essential to achieving a return on investment (ROI) with a sound solution. Once these are in place, the requirements of different departments can be addressed and the proper hardware foundation laid to enable a smart deployment.

DCIM also solves challenges in project deployment, facility assessment and controlled repeatability. Considering all these factors, it is clear how important DCIM is to current IT requirements and how much value it brings in maintaining a data center successfully and overcoming the challenges that arise along the way.

Tips to Manage a Data Center Built for Enterprise-Scale Software

We’re all trying to improve ourselves and our companies. Start-ups aim to become mid-level companies, mid-level companies aim to become major companies, major companies want to expand globally, and so on. As our businesses expand, it is important to evolve how we handle our current and new data centers.

The pandemic slowed us down, but it also created huge demand for more remote servers and for software to help better the situation, as companies shifted their office work remote. This became important due to the ongoing Coronavirus outbreak and the spike in death tolls: according to Worldometer, as of March 2, 2021, more than 114 million people had been infected, with more than 2.4 million deaths. To cut costs and grow without expanding headcount, companies are under enormous pressure to cope, and that requires significant changes to make data centers efficient. Growing software demand, even during a pandemic, demands that we be smart and build smart data centers. Here at Protected Harbor, we build data centers that can host multiple parts of a single huge enterprise application with ease and almost no downtime.

Even maintenance of these data centers has minimal impact on the software, because we make all new changes in development and only promote them to production after deep testing. We perform this maintenance on the weekend, preferably Sunday evening, and it is usually done in just a few minutes.

We can categorize the measures we take as follows:

Analyze

First and foremost, perform a complete analysis of the budget and the requirements, then determine the most cost-efficient way to build the data center without compromising performance. Key points to consider during the analysis include disaster recovery: what downtime is expected, and how would it affect the client experience? Depending on their business, customers can be categorized and assigned a data center customized and built just for them, or shared with customers whose needs closely match theirs.

Plan

Once the analysis is done and the most appropriate approach for the customers chosen, the next step is planning the layout and detailed configuration of a data center able to host huge enterprise software. Planning includes size determination, naming conventions for the servers and the virtual machines inside them, disk and memory allocation, the temperature to maintain, and the sensors to install and their settings.

Automation and AI

This is not a stage but a very important approach to maximizing efficiency. Automating tasks so that staffing does not have to grow just to monitor various parts of the data center is critical to providing the best service without increasing overall cost. Artificial intelligence can be even more effective, as it can read the statistics and help tune the configuration to match actual needs, saving production cost while improving performance.

Climate Control using Sensors

Another important tip is to control the temperature around and inside the data center. The recommended temperature needs to be maintained at all times to avoid damage. If a single component is damaged, it can cause a complete system failure, leaving the customer unable to work, and the reputational risk in that case is huge. This demands that smart sensors be installed.

Go hybrid

The term “hybrid data center” refers to a data center that combines computing and storage environments. The tip here is to combine on-premises data centers, private cloud and/or public cloud platforms in a hybrid IT computing setting, enabling the different businesses we run, and our clients, to adapt rapidly to evolving demands.

Maintain

This is the most important part of the process. Yes, the foundation, meaning the analysis, planning and tips above, is important, but neglecting to manage the data center can result in irreversible corruption, failures and extended periods of downtime. It is important to plan the maintenance process as well. Setting up calendar events for daily, weekly and monthly maintenance of the data center is key. Always keep an eye on the data and operations, across all systems and locations, at all times.
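Generating those recurring calendar entries is straightforward. The sketch below produces upcoming maintenance dates for a given cadence; the start dates and intervals are illustrative:

```python
from datetime import date, timedelta

# Generate upcoming maintenance dates for a fixed cadence.
# Start dates and intervals below are illustrative examples.
def next_runs(start: date, interval_days: int, count: int) -> list[date]:
    """Return the next `count` run dates, `interval_days` apart."""
    return [start + timedelta(days=interval_days * i) for i in range(count)]

daily = next_runs(date(2021, 3, 1), 1, 3)    # three daily checks
weekly = next_runs(date(2021, 3, 7), 7, 2)   # e.g., Sunday-evening windows
print(daily)
print(weekly)
```

These dates would then be pushed into whatever calendar or ticketing system the operations team already uses.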

Along with the stages and tips for managing an enterprise-software-ready data center, there are some other important tips to keep in mind for better results.

Use custom-built, in-house software to manage the data center rather than depending on licenses and vendors.

Licensing tools are mostly used by tech giants to collect data on device installation and use. They are typically one-time-only and do not allow further refinement, and some offer only information that benefits the vendor. They will not help optimize your licensing. To control data center licenses, you need solutions tailored to your own environment and its challenges.

Partnering with Vendors

This is another great tip that can cut costs while providing opportunities to customize tools to our requirements. Following this approach, multiple features can be integrated into a single appliance.

To summarize, these are the steps to managing an enterprise-ready data center: research the latest methods and most efficient tools; consider ways to make the data center more energy- and space-efficient, or how to make better use of current facilities; then produce the detailed plan layout, with specific details about location, allocation and the complete blueprint of the data center; then execute and maintain.

Data Center Infrastructure Management

 

Overview

Worldwide demand for new and more powerful IT-based applications, combined with the economic benefits of consolidation of physical assets, has led to an unprecedented expansion of data centers in both size and density. Limitations of space and power, along with the enormous complexity of managing a large data center, have given rise to a new category of tools with integrated processes – Data Center Infrastructure Management (DCIM).

Once properly deployed, a comprehensive DCIM solution provides data center operations managers with clear visibility of all data center assets along with their connectivity and relationships to support infrastructure – networks, copper and fiber cable plants, power chains and cooling systems. DCIM tools provide data center operations managers with the ability to identify, locate, visualize and manage all physical data center assets, simply provision new equipment and confidently plan capacity for future growth and/or consolidation.

These tools can also help control energy costs and increase operational efficiency. Gartner predicted that DCIM tools would quickly become mainstream in data centers, growing from 1% penetration in 2010 to 60% in 2014. This document will discuss some important data center infrastructure management issues.

We’ll also take a look at how a DCIM product can provide data center managers with the insight, information and tools they need to simplify and streamline operations, automate data center asset management, optimize the use of all resources – system, space, power, cooling and staff – reduce costs, project data center capacities to support future requirements and even extend data center life.

 

Why Data Center Infrastructure Management?

The trend toward consolidation and construction of ever-larger data centers has been driven largely by economy-of-scale benefits. It has been accelerated and facilitated by technological advances such as Web-based applications, system virtualization, more powerful servers delivered in a smaller footprint and an abundance of low-cost bandwidth.

Not many years ago, most computer sites were small enough that local, dedicated IT and facilities staff could reasonably manage almost everything with manual processes and tools such as spreadsheets and Visio diagrams. It has now become painfully clear that IT and facilities professionals need better tools and processes to effectively manage the enormous inventory of physical assets and the complexity of the modern data center infrastructure. Experience shows that once a data center approaches 50-75 racks, management via spreadsheets and Visio becomes unwieldy and ineffective.

The outward expansion and increasing rack density of modern data centers have created serious space and energy consumption concerns, prompting both corporate and government regulatory attention and action. IDC has forecast that data center power and cooling costs will rise from $25 billion in 2015 to almost $45 billion in 2025. Moreover, in a recent Data Center Dynamics research study, U.S. and European data center managers stated that their three largest concerns were increasing rack densities, proper cooling and power consumption. Seemingly overnight, the need for data center infrastructure and asset management tools has become an overwhelming, high-priority challenge for IT and facilities management.

At the highest level, the enterprise data center should be organized and operated to deliver quality service reliably, securely and economically in support of the corporate mission.
However, the natural evolution of roles and responsibilities among three principal groups within the data center – facilities, networking and systems – has in itself made this objective less achievable. Responsibilities have historically been distributed based on specific expertise relating to the physical layers of the infrastructure:

  1. Facilities: Physical space, power and cooling
  2. Networking: Fiber optic and copper cable plants, LANs, SANs and WANs
  3. Systems: Mainframes, servers, virtual servers and storage

Clearly, one major challenge is bridging the responsibilities and activities among various data center functions to minimize the delays, waste and potential operational confusion that can easily arise due to each group’s well-defined, specific roles.

 

What Is Data Center Infrastructure Management?

Basic Data Center Infrastructure Management components and functions include:

  • A Single Repository: One accurate, authoritative database to house all data from across all data centers and sites of all physical assets, including data center layout, with detailed data for IT, power and HVAC equipment and end-to-end network and power cable connections.
  • Asset Discovery and Asset Tracking: Tools to capture assets, their details, relationships and interdependencies.
  • Visualization: Graphical visualization, tracking and management of all data center assets and their related physical and logical attributes – servers, structured cable plants, networks, power infrastructure and cooling equipment.
  • Provisioning New Equipment: Automated tools to support the prompt and reliable deployment of new systems and all their related physical and logical resources.
  • Real-Time Data Collection: Integration with real-time monitoring systems to collect actual power usage/environmental data to optimize capacity management, allowing review of real-time data vs. assumptions around nameplate data.
  • Process-Driven Structure: Change management workflow procedures to ensure complete and accurate adds, changes and moves.
  • Capacity Planning: Capacity planning tools to determine requirements for the future floor and rack space, power, cooling expansion, what-if analysis and modeling.
  • Reporting: Simplified reporting to set operational goals, measure performance and drive improvement.
  • A Holistic Approach: Bridge across organizational domains – facilities, networking and systems, filling all functional gaps; used by all data center domains and groups regardless of hierarchy, including managers, system administrators and technicians.
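The "single repository" and capacity-planning components above can be illustrated with a toy model. The schema, rack names and power figures are assumptions for the example, not a real DCIM data model:

```python
# Toy single-repository model: every asset row carries its location and power draw.
# Schema and figures are illustrative assumptions.

assets = [
    {"id": "srv-01", "rack": "A1", "u": 10, "power_kw": 0.5},
    {"id": "srv-02", "rack": "A1", "u": 12, "power_kw": 0.5},
    {"id": "srv-03", "rack": "B2", "u": 5,  "power_kw": 0.5},
]

def rack_power(rack: str) -> float:
    """Aggregate power draw per rack -- a basic capacity-planning query."""
    return sum(a["power_kw"] for a in assets if a["rack"] == rack)

def what_if_add(rack: str, extra_kw: float, rack_limit_kw: float) -> bool:
    """What-if analysis: would adding extra_kw keep the rack within budget?"""
    return rack_power(rack) + extra_kw <= rack_limit_kw

print(rack_power("A1"))             # 1.0
print(what_if_add("A1", 0.5, 1.2))  # False: rack A1 would exceed its budget
```

Because every query runs against the same repository, the what-if answer is always consistent with the asset inventory, which is precisely the point of the single-database approach.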

A comprehensive Data Center Infrastructure Management solution will directly address the major issues of asset management, system provisioning, space and resource utilization and future capacity planning. Most importantly, it will provide an effective bridge to support the operational responsibilities and dependencies between facilities and IT personnel to eliminate the potential silos.

Once again, your Data Center Infrastructure Management solution will prove invaluable by collecting, mining and analyzing actual historical operational data. DCIM reports, what-if analysis and modeling will help identify opportunities for operational improvement and cost reduction so you can confidently plan and execute data center changes.

What Performs Best? Bare-Metal Servers vs. Virtualization

 

Virtualization technology has become a ubiquitous, end-to-end technology for data centers, edge computing installations, networks, storage and even endpoint desktop systems. However, admins and decision-makers should remember that each virtualization technique differs from the others. Bare-metal virtualization is clearly the preeminent technology for many IT goals, but host machine hypervisor technology works better for certain virtualization tasks.

By installing a hypervisor to abstract software from the underlying physical hardware, IT admins can increase the use of computing resources while supporting greater workload flexibility and resilience. Take a fresh look at the two classic virtualization approaches and examine the current state of both technologies.

 

What is bare-metal virtualization?

Bare-metal virtualization installs a Type 1 hypervisor — a software layer that handles virtualization tasks — directly onto the hardware before the system installs any other OSes, drivers or applications. Common hypervisors include VMware ESXi and Microsoft Hyper-V. Admins often refer to bare-metal hypervisors as the OSes of virtualization, though hypervisors aren’t operating systems in the traditional sense.

Once admins install a bare-metal hypervisor, that hypervisor can discover and virtualize the system’s available CPU, memory and other resources. The hypervisor creates a virtual image of the system’s resources, which it can then provision to create independent VMs. VMs are essentially individual groups of resources that run OSes and applications. The hypervisor manages the connection and translation between physical and virtual resources, so VMs and the software that they run only use virtualized resources.

Since virtualized resources and physical resources are inherently bound to each other, virtual resources are finite. This means the number of VMs a bare-metal hypervisor can create is contingent upon available resources. For example, if a server has 24 CPU cores and the hypervisor translates those physical CPU cores into 24 vCPUs, you can create any mix of VMs that use up to that total amount of vCPUs — e.g., 24 VMs with one vCPU each, 12 VMs with two vCPUs each and so on. Though a system could potentially share additional resources to create more VMs — a process known as oversubscription — this practice can lead to undesirable consequences.
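The vCPU arithmetic in this example can be sketched directly. The following toy allocator, an illustration rather than how any real hypervisor is implemented, refuses requests once the physical vCPU pool is exhausted instead of oversubscribing:

```python
# Toy vCPU allocator for a bare-metal hypervisor (illustrative sketch).
# With no oversubscription, total allocated vCPUs cannot exceed physical cores.

class Host:
    def __init__(self, physical_cores: int):
        self.total_vcpus = physical_cores  # assume 1:1 core-to-vCPU translation
        self.allocated = 0

    def create_vm(self, vcpus: int) -> bool:
        """Allocate vCPUs to a new VM; refuse rather than oversubscribe."""
        if self.allocated + vcpus > self.total_vcpus:
            return False
        self.allocated += vcpus
        return True

host = Host(physical_cores=24)
print(all(host.create_vm(2) for _ in range(12)))  # True: 12 VMs x 2 vCPUs = 24
print(host.create_vm(1))                          # False: the pool is exhausted
```

Real hypervisors do permit oversubscription by time-slicing vCPUs onto cores; this sketch models only the strict 1:1 case described in the paragraph above.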

Once the hypervisor creates a VM, it can configure the VM by installing an OS such as Windows Server 2019 and an application such as a database. Consequently, the critical characteristic of a bare-metal hypervisor and its VMs is that every VM remains completely isolated and independent of every other VM. This means that no VM within a system shares resources with or even has awareness of any other VM on that system.

Because a VM runs within a system’s memory, admins can save a fully configured and functional VM to disk, then back up and reload the VM onto the same or other servers in the future, or duplicate it to invoke multiple instances of the same VM on other servers in a system.

 

 

Advantages and disadvantages of bare-metal virtualization

Virtualization is a mature and reliable technology; VMs provide powerful isolation and mobility. With bare-metal virtualization, every VM is logically isolated from every other VM, even when those VMs coexist on the same hardware. A single VM can neither directly share data with or disrupt the operation of other VMs nor access the memory content or traffic of other VMs. In addition, a fault or failure in one VM does not disrupt the operation of other VMs. In fact, the only real way for one VM to interact with another VM is to exchange traffic through the network as if each VM is its own separate server.

Bare-metal virtualization also supports live VM migration, which enables VMs to move from one virtualized system to another without halting VM operations. Live migration enables admins to easily balance server workloads or offload VMs from a server that requires maintenance, upgrades or replacements. Live migration also increases efficiency compared to manually reinstalling applications and copying data sets.

However, the hypervisor itself poses a potential single point of failure (SPOF) for a virtualized system. That said, virtualization technology is so mature and stable that modern hypervisors, such as VMware ESXi 7, largely lack such flaws and attack vectors. If a VM fails, the cause probably lies in that VM’s OS or application rather than in the hypervisor.

 

What is hosted virtualization?

Hosted virtualization offers many of the same characteristics and behaviors as bare-metal virtualization. The difference comes from how the system installs the hypervisor. In a hosted environment, the system installs the host OS prior to installing a suitable hypervisor — such as VMware Workstation, KVM or Oracle Virtual Box — atop that OS.

Once the system installs a hosted hypervisor, the hypervisor operates much like a bare-metal hypervisor. It discovers and virtualizes resources and then provisions those virtualized resources to create VMs. The hosted hypervisor and the host OS manage the connection between physical and virtual resources so that VMs — and the software that runs within them — only use those virtualized resources.

However, with hosted virtualization, the system can’t virtualize resources for the host OS or any applications installed on it, because those resources are already in use. This means that a hosted hypervisor can only create as many VMs as there are available resources, minus the physical resources the host OS requires.

The VMs the hypervisor creates can each receive guest operating systems and applications. In addition, every VM created under a hosted hypervisor is isolated from every other VM. Similar to bare-metal virtualization, VMs in a hosted system run in memory and the system can save or load them as disk files to protect, restore or duplicate the VM as desired.

Hosted hypervisors are most commonly used in endpoint systems, such as laptop and desktop PCs, to run two or more desktop environments, each with potentially different OSes. This can benefit business activities such as software development.

In spite of this, organizations use hosted virtualization less often because the presence of a host OS offers no benefits in terms of virtualization or VM performance. The host OS imposes an unnecessary layer of translation between the VMs and the underlying hardware. Inserting a common OS also poses a SPOF for the entire computer, meaning a fault in the host OS affects the hosted hypervisor and all of its VMs.

Although hosted hypervisors have fallen by the wayside for many enterprise tasks, the technology has found new life in container-based virtualization. Containers are a form of virtualization that relies on a container engine, such as Docker, LXC or Apache Mesos, as a hosted hypervisor. The container engine creates and manages virtual instances — the containers — that share the services of a common host OS such as Linux.

The crucial difference between hosted VMs and containers is that the system isolates VMs from each other, while containers directly share the same underlying OS kernel. This enables containers to consume fewer system resources than VMs. Additionally, containers can start much faster and exist in far greater numbers than VMs, enabling greater dynamic scalability for workloads that rely on microservice-type software architectures, as well as important enterprise services such as network load balancers.