5 Ways to Increase your Data Center Uptime


A data center cannot survive unless it delivers uptime approaching 99.9999%. Most customers choose a data center precisely to avoid unexpected outages, and for some of them even a few seconds of downtime has a serious business impact. To avoid such issues, there are several effective ways to increase data center uptime.

  • Eliminate single points of failure

Deploy high availability (HA) for hardware (routers, switches, servers, power, DNS, and ISP links) and configure HA for applications as well. If any hardware device or application fails, traffic can move to a standby server or device, avoiding unexpected downtime.

  • Monitoring

An effective monitoring system reports the status of every component. If anything goes wrong, we can fail over to the standby pair and then investigate the faulty device. This way, the data center admin can find issues before end users report them.
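The monitor-then-alert flow described above can be sketched in Python. This is a minimal illustration: the check functions and system names are hypothetical placeholders for real probes (ping, HTTP, SNMP), not any particular monitoring product.

```python
from typing import Callable, Dict, List

def run_checks(checks: Dict[str, Callable[[], bool]]) -> List[str]:
    """Run every health check and return the names of failing systems."""
    failed = []
    for name, check in checks.items():
        try:
            healthy = check()
        except Exception:
            healthy = False  # an exception during a probe counts as a failure
        if not healthy:
            failed.append(name)
    return failed

# Hypothetical checks standing in for real probes
checks = {
    "router-1": lambda: True,
    "web-server": lambda: False,   # simulate a fault
    "dns": lambda: True,
}

for name in run_checks(checks):
    print(f"ALERT: {name} is down; fail over to its standby pair")
```

A real system would run this loop on a schedule and page the admin, which is what lets issues be caught before the end user notices.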

  • Updating and maintenance

Keep all systems up to date and perform regular maintenance on every device to avoid operating-system security breaches. Keep applications up to date as well; planned maintenance is always better than unexpected downtime. Finally, test all applications in a lab environment to catch application-related issues before deploying them to production.

  • Ensure Automatic Failover

Automatic failover guards against human error. If we miss a notification in the monitoring system and an application crashes as a result, automatic failover moves the workload to an available server, so end users never notice any downtime on their end.
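At the application level, failover can be as simple as trying servers in priority order. A hedged sketch (the `primary`/`secondary` functions are stand-ins for real service endpoints):

```python
def call_with_failover(servers, request):
    """Try each server in priority order; return the first successful response."""
    last_error = None
    for server in servers:
        try:
            return server(request)
        except ConnectionError as exc:
            last_error = exc  # this server failed; transparently try the next
    raise RuntimeError("all servers failed") from last_error

# Hypothetical endpoints simulating a crashed primary and a healthy standby
def primary(req):
    raise ConnectionError("primary crashed")

def secondary(req):
    return f"handled {req} on secondary"

print(call_with_failover([primary, secondary], "GET /"))
```

The caller never sees the primary's failure, which is the whole point: the end user experiences no downtime.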

  • Provide Excellent Support

Finally, we need to take good care of our customers. We need to be available 24/7 and deliver solutions quickly, so customers do not lose valuable time dealing with IT issues.

Virtualization vs cloud computing


Cloud computing and virtualization are both technologies developed to maximize the use of computing resources while reducing their cost. They are also mentioned frequently in discussions of high availability and redundancy. While it is not uncommon to hear people discuss them interchangeably, they are very different approaches to the problem of maximizing the use of available resources. They differ in many ways, which leads to some important considerations when selecting between the two.

Virtualization: More Servers on the Same Hardware

It used to be that if you needed more computing power for an application, you had to purchase additional hardware. Redundancy systems were based on duplicate hardware sitting in standby mode in case something failed. The problem was that as CPUs grew more powerful and gained multiple cores, a lot of computing resources went unused, which cost companies a great deal of money. Enter virtualization.

Simply stated, virtualization is a technique that allows you to run more than one server on the same hardware. Typically, one server is the host and controls access to the physical server’s resources. One or more virtual servers then run within containers provided by the host. The container is transparent to the virtual server, so the operating system does not need to be aware of the virtual environment. This allows servers to be consolidated, which reduces hardware costs. Fewer physical servers also mean less power, which further reduces cost.

Most virtualization systems allow virtual servers to be easily moved from one physical host to another. This makes it very simple for system administrators to reconfigure servers based on resource demand or to move a virtual server off a failing physical node. Virtualization reduces complexity by reducing the number of physical hosts, but it still involves purchasing servers and software and maintaining your infrastructure. Its greatest benefit is reducing the cost of that infrastructure by maximizing the usage of physical resources.

Cloud Computing: Measured Resources, Pay for What You Use

While virtualization may be used to provide cloud computing, cloud computing is quite different from virtualization. Cloud computing may look like virtualization because your application appears to run on a virtual server detached from any reliance on a single physical host, and in that respect they are similar. However, cloud computing is better described as a service, whereas virtualization is part of the physical infrastructure.

Cloud computing grew out of the concept of utility computing: the idea that computing resources and hardware would become a commodity, to the point that companies would purchase computing resources from a central pool and pay only for the CPU cycles, RAM, storage, and bandwidth they used. These resources would be metered to allow a pay-for-what-you-use model, much like buying electricity from the electric company, which is how it became known as utility computing. It is common for cloud computing to be distributed across many servers, which provides redundancy, high availability, and even geographic redundancy. This also makes cloud computing very flexible.
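The metering idea can be made concrete with a toy billing calculation. The unit rates below are invented for illustration; real providers publish their own rate cards.

```python
# Hypothetical unit prices, purely for illustration
RATES = {
    "cpu_hours": 0.05,        # $ per vCPU-hour
    "ram_gb_hours": 0.01,     # $ per GB of RAM per hour
    "storage_gb_months": 0.10,
    "bandwidth_gb": 0.08,
}

def monthly_bill(usage: dict) -> float:
    """Multiply each metered quantity by its unit rate, like a utility meter."""
    return round(sum(RATES[k] * v for k, v in usage.items()), 2)

bill = monthly_bill({
    "cpu_hours": 1440,        # 2 vCPUs running for a 720-hour month
    "ram_gb_hours": 2880,     # 4 GB of RAM for 720 hours
    "storage_gb_months": 50,
    "bandwidth_gb": 120,
})
print(f"${bill}")
```

You pay only for what the meter recorded, exactly as with electricity: no usage, no charge.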

It is easy to add resources to your application: you simply use more, just as you draw more electricity when you need it. Cloud computing has been designed with scalability in mind. The biggest drawback, of course, is that you do not control the servers. Your data is out there in the cloud, and you have to trust the provider to keep it safe. Many cloud computing services offer SLAs that promise a level of service and safety, but it is critical to read the fine print: a failure of the cloud service could result in the loss of your data.

A Practical Comparison (Virtualization vs Cloud Computing)

VIRTUALIZATION

Virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split 1 system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s ability to separate the machine’s resources from the hardware and distribute them appropriately.
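The hypervisor's core bookkeeping job, carving one machine's resources into isolated allocations, can be modeled in a few lines. This is a toy model only; real hypervisors such as ESXi or KVM enforce isolation with hardware virtualization support.

```python
class Host:
    """Toy model of a physical host whose resources a hypervisor carves up."""
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> bool:
        """Allocate a VM only if the remaining physical resources allow it."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            return False  # overcommit deliberately not modeled here
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}
        return True

host = Host(cpus=16, ram_gb=64)
host.create_vm("web", cpus=4, ram_gb=16)
host.create_vm("db", cpus=8, ram_gb=32)
print(host.free_cpus, host.free_ram_gb)  # resources left for further VMs
```

Each VM sees only its own slice, which is what makes splitting one physical system into distinct, secure environments possible.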

CLOUD COMPUTING

Cloud computing is a set of principles and approaches for delivering compute, network, and storage infrastructure resources, services, platforms, and applications to users on demand across any network. These infrastructure resources, services, and applications are sourced from clouds: pools of virtual resources orchestrated by management and automation software so that users can access them on demand through self-service portals, supported by automatic scaling and dynamic resource allocation.

Evading the Rise of Ransomware


Security can be defined as protection from unwanted harm or unwanted access to resources. Information security protects data from unauthorized users and access, and it is a vital asset for any organization. In earlier days it was difficult to identify ransomware before it entered or attacked a user’s system, and such attacks would damage mail servers, databases, expert systems, and confidential systems. This article looks at the analysis and detection of ransomware, which has a major impact on business continuity.

RANSOMWARE

Lately, with extensive use of the internet, cybercriminals have rapidly grown at targeting naïve users with threats and malware to generate a ransom. Ransomware has become the most agonizing form of malware. It comes in two types: locker ransomware and crypto-ransomware. Crypto-ransomware, the more familiar type, aims to encrypt users’ data, while locker ransomware prevents users from accessing their data by locking the system or device. Both types demand a ransom, payable electronically, to restore access to the data and the system. Locker ransomware claims a fee from victims as a “fine” for downloading illegal content, per a fake law-enforcement notice. Crypto-ransomware sets a time limit and warns victims to pay the ransom within that time or lose their data forever.

Spreading of ransomware is possible by the following methods:

  1. Phishing e-mail messages with malicious file attachments;
  2. Software patches that download the threat onto the victim’s machine while working online.

Spreading of Ransomware Attack

  1. Phishing emails: The most common way of spreading ransomware is through phishing or spam emails. These mails include a .exe file or an attachment which, when opened, launches ransomware on the victim’s machine.
  2. Exploit kits: These are compromised websites set up by attackers for malicious use. Exploit kits probe website visitors for vulnerabilities and download the ransomware onto their machines.

VULNERABILITY ASSESSMENT AND TOOLS

A vulnerability is a weakness that allows unsafe or unauthorized access by an intruder into an unprotected or exposed network. Common threats include worms, viruses, spyware applications, spam emails, etc. Vulnerability assessment is the most important technique for rating the attacks or risks that can occur in a system and thereby affect an organization’s business continuity. A vulnerability assessment has several steps:

  1. Vulnerability analysis
  2. Scope of the vulnerability assessment
  3. Information gathering
  4. Vulnerability identification
  5. Information Analysis and
  6. Planning
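Step 4, vulnerability identification, often begins with discovering exposed services. A minimal TCP probe can be sketched with Python's standard socket module. This is an illustration only: dedicated tools such as Nmap or OpenVAS are far more capable, and you should scan only hosts you are authorized to test.

```python
import socket

def open_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Probe a few well-known ports on the local machine
print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```

An unexpectedly open port is a lead for the analysis and planning steps that follow, not a finding by itself.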

Assessment Tools

Vulnerability assessment, which is essentially testing, is carried out with well-known tools called vulnerability assessment tools. These tools are used to mitigate identified vulnerabilities, for example by investigating unethical access to copyrighted materials, violations of organizational policies, etc. The key point about vulnerability assessment is that it warns us about a vulnerability before the system is compromised and helps us avoid or prevent the attack; these tools can therefore be categorized as proactive security measures for an organization. The major step of a vulnerability assessment is accurate testing of the system. If this is overlooked, it can lead to either false positives or false negatives. A false positive is like quicksand, where we cannot find what we are searching for; a false negative is like a black hole, where we do not know what we should be searching for. False positives are a significant concern in testing.

Common Vulnerability Assessment Tools

  • Vulnerabilities are the most crucial concern for information systems. A configuration error or policy violation can compromise an organization’s network, and such attacks may be carried out for personal or corporate gain.
  • Not only local area networks but also websites are susceptible to attacks, where systems can be exploited by insiders or outsiders of an organization.
  • Some of the very commonly used vulnerability assessment tools are listed below:
    • Wireshark
    • Nmap
    • Metasploit
    • OpenVAS
    • AirCrack

Limitations of Existing Vulnerability Assessment Tools

False positives are the most dangerous limitation of existing vulnerability assessment tools. They require a lot of testing and study to assess the nature of the reported errors, which is an expensive and time-consuming process, and much of the identification-related information these tools produce leads to false positives.

Penetration Testing

  • Penetration testing, also called a pen test, is an attempt to assess malicious activity or a security breach by exploiting vulnerabilities.
  • It includes testing the networks, security applications, and processes involved in the network.
  • Penetration testing is done to improve the performance of the system by testing the system’s efficiency.

How Will the Shift to Virtualization Impact Data Center Infrastructure?


Virtualization is the process of creating software-based (virtual) versions of computers, storage, networking, servers, or applications. It is fundamental to building a cloud computing strategy. Virtualization is achieved using a hypervisor, software that runs on top of the physical server or host. The hypervisor pools the resources of the physical servers and allocates them to virtual environments, which anyone with access rights and an active internet connection can reach from anywhere in the world.

 

Hypervisors can be categorized into two types:

  1. Type 1: The most frequently used type, installed directly on top of the physical server. Type 1 hypervisors are more secure and have lower latency, which is essential for best performance. Commonly used examples are VMware ESXi, Microsoft Hyper-V, and KVM.
  2. Type 2: In this type, a layer of host OS sits between the physical server and the hypervisor. These are commonly referred to as hosted hypervisors.

Since clients nowadays do not want to host large equipment in their own offices, they are likely to move toward virtualization, where a managed IT company like Protected Harbor can prepare a virtual environment based on their needs, without any hassle. Data center infrastructure is expanding as a result, and keeping a data center scalable requires following DCIM best practices.

Virtualization affects not only the size of data centers but everything located inside them. Bigger data centers need additional power units with redundancy, air conditioning, and so on. This also leads to the concept of interconnected data centers, where one hosts certain parts of an application layer and another hosts the remainder. Virtualization underpins the idea of the cloud, since the physical servers are not visible to clients, who use their resources without being involved in managing the equipment. One of the most important benefits of virtualization is that it makes the best data center infrastructure management practices achievable.

Data Center Infrastructure Management


In today’s world, data centers are the backbone of all the technologies we use in our daily lives, from electronic devices like phones and PCs all the way to the software that makes our lives easier. Data Center Infrastructure Management (DCIM) plays an important role in running everything without a glitch.

DCIM includes the basic steps of managing anything: deployment, monitoring, and maintenance. A company that wants its services to run with no downtime (99.99%) always looks for recent developments in technology that will make its data centers rock solid. This is where Protected Harbor excels: we follow every step needed to keep our data centers equipped and updated with the latest developments in the tech world.

Managing a data center involves several people who are experts in their own departments and who work as a team toward the end goal. For example, a network engineer makes sure the networking equipment is functional and there are no anomalies, while the data center technician is responsible for all other hardware deployed inside the data center. In short, here are a few things that should always be considered while managing a data center:

  1. Who is responsible for the equipment?
  2. What is the current status of the equipment?
  3. Where is the equipment physically located?
  4. When might potential issues occur?
  5. How is the equipment interconnected?
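The five questions above map naturally onto a small asset record. A sketch using Python's dataclasses (the field names and sample assets are illustrative, not any particular DCIM product's schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Asset:
    name: str
    owner: str             # who is responsible for the equipment
    status: str            # current status, e.g. "operational", "degraded"
    location: str          # physical location: room / rack / slot
    next_maintenance: date # when potential issues might occur
    connected_to: list     # how the equipment is interconnected

inventory = [
    Asset("core-switch-1", "network-team", "operational",
          "room A / rack 3 / U40", date(2022, 6, 1), ["router-1", "fw-1"]),
    Asset("db-server-2", "dc-technicians", "degraded",
          "room B / rack 7 / U12", date(2022, 3, 15), ["core-switch-1"]),
]

def due_for_maintenance(assets, on: date):
    """Flag equipment whose scheduled maintenance date has arrived."""
    return [a.name for a in assets if a.next_maintenance <= on]

print(due_for_maintenance(inventory, date(2022, 4, 1)))
```

Keeping these records current is what turns the checklist into something a team can actually act on.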

Monitoring a data center is as important as any other factor involved, because it gives a complete perspective of the hardware and software and sends an alert in case of any event.

Power backup and air conditioning are two vital resources for running a data center, though most people do not think of them when they hear “data center.” A power failure will bring a data center down, and without power backup this is highly likely, so data centers have expensive, redundant power backup systems that take over when a failure occurs. Data center equipment also generates massive heat, which is where air conditioning comes into play. The temperature inside a data center should always remain within its limits; even a small (1-2 degree) rise puts the hardware in jeopardy. Monitoring provides accurate data to ensure all of this stays functional, and actions are taken based on those events.
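The temperature limit described above is exactly the kind of threshold a monitoring system encodes. A sketch (the setpoint and tolerance are illustrative values, not a standard):

```python
SETPOINT_C = 22.0    # illustrative target inlet temperature
TOLERANCE_C = 2.0    # even a 1-2 degree rise is risky, so keep this tight

def temperature_alerts(readings: dict) -> list:
    """Return an alert for every sensor outside setpoint +/- tolerance."""
    alerts = []
    for sensor, temp in readings.items():
        if abs(temp - SETPOINT_C) > TOLERANCE_C:
            alerts.append(f"{sensor}: {temp:.1f} C out of range")
    return alerts

readings = {"rack-3-inlet": 21.5, "rack-7-inlet": 25.1, "room-b": 22.8}
for alert in temperature_alerts(readings):
    print("ALERT", alert)   # only rack-7-inlet exceeds the tolerance
```

In practice the same pattern applies to power draw, UPS charge level, and humidity: compare a reading to a limit and act on the event.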

When deploying a data center, scalability is always taken into account: a scalable and secure data center is always needed.

Crashes, failures, and outages are the biggest symptoms of bad management, and eliminating them effectively is the primary job of data center infrastructure management. The end goal of DCIM is always high availability and durability. An unexpected event can occur at any time, and how quickly it is recognized and resolved changes the availability percentage. The application interface is a top priority that should always remain online, and best practices are followed to keep it that way. The first step in deploying a data center is planning: the plan gives an overview of the assets required to deploy and manage the facility, and it assigns people to each task involved in a successful deployment.

 

Scalability of Data Centers and Why It’s Important

Technology becomes more advanced every single day, and to keep up with its development, data centers must be able to accommodate every change. Scalability is the idea that a data center can be expanded based on need without affecting previously deployed equipment. It is important because it determines how fast a data center can grow, and rising demand shows that growth is needed.

DCIM also involves asset management: keeping track of all the equipment deployed inside the data center, when it will need replacement or maintenance, and generating reports on the expenses involved. Since data centers contain a lot of equipment, maintenance will sometimes require involving the hardware vendor to fix broken equipment.

In the end, DCIM is the backbone of a data center. It plays an important role in every aspect of a tech company, and with DCIM tools, high availability can be achieved.

Top 10 Ransomware Attacks 2021


Ransomware Definition

Ransomware is a type of malware (malicious software) that threatens to publish or prevent access to data or a computer system, typically by encrypting it. The victim is faced with the ultimatum of either paying a ransom or risking the publication or permanent loss of their data or access to their system. The ransom demand usually involves a deadline. If the victim doesn’t pay on time, the data is permanently lost, or the ransom is increased.

Attacks using ransomware are all too frequent these days, affecting large firms in both North America and Europe. Cybercriminals will target any customer or company, and victims come from every sector of the economy.

The FBI, other government agencies, and the No More Ransom Project advise against paying the ransom, both to break the ransomware cycle and because payment does not ensure retrieval of the encrypted data. If the ransomware is not removed from the system, 50% of the victims who pay are likely to experience further attacks.

 

History and Future of Ransomware

According to Becker’s Hospital Review, the first known ransomware attack occurred in 1989 and targeted the healthcare industry. 28 years later, the healthcare industry remains a top target for ransomware attacks.

The first known attack was initiated in 1989 by Joseph Popp, Ph.D., an AIDS researcher, who attacked by distributing 20,000 floppy disks to AIDS researchers spanning more than 90 countries, claiming that the disks contained a program that analyzed an individual’s risk of acquiring AIDS through the use of a questionnaire.

However, the disk also contained a malware program that initially remained dormant in computers, only activating after a computer was powered on 90 times. After the 90-start threshold was reached, the malware displayed a message demanding a payment of $189 and another $378 for a software lease. This ransomware attack became known as the AIDS Trojan or the PC Cyborg.

There will be no end to ransomware anytime soon. Ransomware-as-a-service (RaaS) attacks skyrocketed in 2021 and will continue to rise. About 304.7 million ransomware attacks were attempted in the first half of 2021, and many more went unreported, according to 2021 ransomware statistics.

A recent report by Tripwire supported the fact that ransomware will keep growing, and the post-ransomware costs will keep climbing significantly. There’s no denying the fact that Ransomware is being used as a weapon, and how ransomware spreads is no longer a mystery.

Modern-day attacks target operational technology, operating system, medical and healthcare services, third-party software, and IoT devices. Fortunately, organizations don’t have to be sitting ducks; they can minimize the risk of attacks by being proactive and having a reliable ransomware data recovery infrastructure.

Top Ransomware Attacks

 

1. Kia Motors

Kia Motors America (KMA) was hit by a ransomware attack in February that affected both internal and customer-facing systems, including mobile apps, payment services, phone services, and dealership systems. The attack also impacted the IT systems required to deliver new vehicles to customers.

DoppelPaymer was thought to be the ransomware family that hit Kia, and the threat actors claimed to have also targeted Kia’s parent business, Hyundai Motors America. Similar system failures were also experienced by Hyundai.

Kia and Hyundai, on the other hand, denied being attacked, a frequent tactic victims use to protect their reputation and customer loyalty.

2. CD Projekt Red

In February 2021, a ransomware attack hit CD Projekt Red, a video game studio located in Poland, causing significant delays to development work on its highly anticipated release Cyberpunk 2077. The threat actors apparently stole source code for several of the company’s video games, including Cyberpunk 2077, Gwent, and The Witcher 3, along with an unpublished version of The Witcher 3.

According to CD Projekt Red, the unlawfully obtained material is currently being distributed online. Following the incident, the company put many security measures in place, including new firewalls with anti-malware protection, a new remote-access solution, and a redesign of critical IT infrastructure.

3. Acer

Acer, a Taiwanese computer manufacturer, was hit by the REvil ransomware in March. This attack was notable because it demanded a ransom of $50,000,000, the largest known demand to date.

According to Advanced Intelligence, the REvil gang targeted a Microsoft Exchange server on Acer’s domain before the attack, implying that the Microsoft Exchange vulnerability was weaponized.

4. DC Police Department

The Metropolitan Police Department in Washington, D.C., was hit by ransomware from the Babuk gang, a Russian ransomware syndicate. The police department refused to pay the $4 million demanded by the group in exchange for not exposing the agency’s information and encrypted data.

Internal material, including police officer disciplinary files and intelligence reports, was massively leaked due to the attack, resulting in a 250GB data breach. Experts said it was the worst ransomware attack on a police agency in the United States.

5. Colonial Pipeline

The Colonial Pipeline ransomware assault in 2021 was likely the most high-profile of the year. The Colonial Pipeline transports roughly half of the fuel on the East Coast. The ransomware attack was the most significant hack on oil infrastructure in US history.

On May 7, the DarkSide group infected the organization’s computerized pipeline management equipment with ransomware. DarkSide’s attack vector, according to Colonial Pipeline’s CEO, was a single hacked password for an active VPN account that was no longer in use. Because Colonial Pipeline did not use multi-factor authentication, attackers could access the company’s IT network and data more quickly.

6. Brenntag

In May, Brenntag, a German chemical distribution company, was also struck by a DarkSide ransomware attack around the same time as Colonial Pipeline. According to DarkSide, the hack targeted the company’s North American business and resulted in the theft of 150 GB of critical data.

They got access by buying stolen credentials, according to DarkSide affiliates. Threat actors frequently buy stolen credentials — such as Remote Desktop credentials — on the dark web, which is why multi-factor authentication and detecting unsafe RDP connections are critical.

DarkSide’s first demand was 133.65 Bitcoin, or nearly $7.5 million, which would have been the highest payment ever made. Through negotiations, Brenntag reduced the ransom to $4.4 million, which it paid.

7. Ireland’s Health Service Executive (HSE)

In May 2021, a variation of Conti ransomware infected Ireland’s HSE, which provides healthcare and social services. The organization shut down all of its IT systems after the incident. Many health services in Ireland were impacted, including the processing of blood tests and diagnoses.

The organization refused to pay the $20 million Bitcoin ransom, and the Conti ransomware group ultimately provided the decryption key for free. Even so, the Irish health service suffered months of substantial disruption as it worked to repair the 2,000 IT systems that had been infected by the ransomware.

8. JBS

Also, in May 2021, JBS, the world’s largest meat processing plant, was hit by a ransomware attack that forced the company to stop the operation of all its beef plants in the U.S. and slow the production of pork and poultry. The cyberattack significantly impacted the food supply chain and highlighted the manufacturing and agricultural sectors’ vulnerability to disruptions of this nature.

The FBI identified the threat actors as the REvil ransomware-as-a-service operation. According to JBS, the threat actors targeted servers supporting North American and Australian IT systems. The company ultimately paid a ransom of $11 million to the Russian-based ransomware gang to prevent further disruption.

9. Kaseya

Kaseya, an IT services company for MSP and enterprise clients, was another victim of REvil ransomware, this time during the July 4th holiday weekend. Although only about 1% of Kaseya’s customers were breached, an estimated 800 to 1,500 small to mid-sized businesses were affected through their MSPs. Among them was Coop, a Sweden-based supermarket chain that had to temporarily close 800 stores because it could not open its cash registers.

The attackers identified a chain of vulnerabilities, ranging from improper authentication validation to SQL injection, in Kaseya’s on-premises VSA software, which organizations typically run in their DMZs. REvil then used the MSPs’ Remote Monitoring and Management (RMM) tools to push the attack out to all connected agents.

10. Accenture

The ransomware gang LockBit hit Accenture, the global tech consultancy, with an attack in August that resulted in a leak of over 2,000 stolen files. The slow leak suggests that Accenture did not pay the $50 million ransom.

According to CyberScoop, Accenture knew about the attack on July 30 but did not confirm the breach until August 11, after a CNBC reporter tweeted about it. CRN criticized the firm for its lack of transparency about the attack, saying that the incident was a “missed opportunity by an IT heavyweight” to help spread awareness about ransomware.

 

Bonus: CNA Financial (2021)

CNA Financial, the seventh largest commercial insurer in the United States, announced on March 23, 2021, that it had “experienced a sophisticated cybersecurity attack.” Phoenix Locker ransomware was used in the attack, which was carried out by a group called Phoenix.

CNA Financial paid $40 million in May 2021 to regain access to the data. While CNA has been tight-lipped about the specifics of the negotiation and sale, it claims that all of its systems have been fully restored since then.

 

Types of ransomware:

There are two main types of ransomware:

  1. Crypto Ransomware

    Crypto ransomware encrypts files on a computer so the user cannot access them.

  2. Locker Ransomware

    Locker ransomware does not encrypt files. Rather, it locks the victim out of their device, preventing them from using it. Once the victim is locked out, the cybercriminals carrying out the attack demand a ransom to unlock the device.

Now you understand what ransomware is and the two main types of ransomware that exist. Let’s explore 10 types of ransomware attacks to help you understand how different and dangerous each type can be.

  • Locky

    Locky is a type of ransomware that was first released in a 2016 attack by an organized group of hackers. With the ability to encrypt over 160 file types, Locky spreads by tricking victims into installing it via fake emails with infected attachments. This method of transmission is called phishing, a form of social engineering. Locky targets a range of file types often used by designers, developers, engineers, and testers.

  • WannaCry

    WannaCry is a ransomware attack that spread across 150 countries in 2017. It was designed to exploit a vulnerability in Windows, using an exploit allegedly created by the United States National Security Agency and leaked by the Shadow Brokers group. WannaCry affected 230,000 computers globally. The attack hit a third of hospital trusts in the UK, costing the NHS an estimated £92 million. Users were locked out, and a ransom was demanded in Bitcoin. The attack highlighted the problematic use of outdated systems, which left the vital health service vulnerable. The global financial impact of WannaCry was substantial: the cybercrime caused an estimated $4 billion in financial losses worldwide.

  • Bad Rabbit

    Bad Rabbit is a 2017 ransomware attack that spread using a method called a ‘drive-by’ attack, where insecure websites are targeted and used to carry out an attack. During a drive-by ransomware attack, a user visits a legitimate website, not knowing that they have been compromised by a hacker. Drive-by attacks often require no action from the victim, beyond browsing the compromised page. However, in this case, they are infected when they click to install something that is malware in disguise. This element is known as a malware dropper. Bad Rabbit used a fake request to install Adobe Flash as a malware dropper to spread its infection.

  • Ryuk

    Ryuk is ransomware that spread in August 2018. It disabled the Windows System Restore option, making it impossible to restore encrypted files without a backup, and it also encrypted network drives. The effects were crippling, and many targeted organizations in the US paid the demanded ransoms. August 2018 reports estimated that funds raised from the attack exceeded $640,000.

  • Troldesh

    The Troldesh ransomware attack happened in 2015 and spread via spam emails with infected links or attachments. Interestingly, the Troldesh attackers communicated with victims directly over email to demand ransoms. The cybercriminals even negotiated discounts for victims with whom they built a rapport, a rare occurrence indeed. This tale is the exception, not the rule: it is never a good idea to negotiate with cybercriminals. Avoid paying the demanded ransom at all costs, as doing so only encourages this form of cybercrime.

  • Jigsaw

    Jigsaw is a ransomware attack that started in 2016. This attack got its name as it featured an image of the puppet from the Saw film franchise. Jigsaw gradually deleted more of the victim’s files each hour that the ransom demand was left unpaid. The use of horror movie imagery in this attack caused victims additional distress.

  • CryptoLocker

    CryptoLocker is ransomware that was first seen in 2007 and spread through infected email attachments. Once on your computer, it searched for valuable files to encrypt and hold to ransom. It is thought to have affected around 500,000 computers. Law enforcement and security companies eventually managed to seize a worldwide network of hijacked home computers that was being used to spread CryptoLocker. This allowed them to control part of the criminal network and capture the data as it was being sent, without the criminals knowing. That action later led to the development of an online portal where victims could get a key to unlock and release their data for free, without paying the criminals.

  • Petya

    Petya (not to be confused with ExPetr) is a ransomware attack that first hit in 2016 and resurged in 2017 as GoldenEye. Rather than encrypting specific files, this vicious ransomware encrypts the victim’s entire hard drive. It does this by encrypting the primary file table, making accessing files on the disk impossible. Petya spread through HR departments via a fake job application email with an infected Dropbox link.

  • GoldenEye

    The resurgence of Petya, known as GoldenEye, led to a global ransomware attack that happened in 2017. Dubbed WannaCry’s ‘deadly sibling,’ GoldenEye hit over 2,000 targets, including prominent oil producers in Russia and several banks. Frighteningly, GoldenEye even forced workers at the Chernobyl nuclear plant to check radiation levels manually as they had been locked out of their Windows PCs.

  • GandCrab

    GandCrab is a rather unsavory, infamous ransomware attack that threatened to reveal victims’ porn-watching habits. Claiming to have hijacked the user’s webcam, GandCrab’s cybercriminals demanded a ransom or else they would make the embarrassing footage public. After first hitting in January 2018, GandCrab evolved into multiple versions. As part of the No More Ransom Initiative, internet security providers and the police collaborated to develop a ransomware decryptor to rescue victims’ sensitive data from GandCrab.

How to Spot a Ransomware Email

You now know about the various types of ransomware attacks that have been perpetrated against individuals and businesses in recent years. Many of the victims of the ransomware attacks we’ve mentioned became infected after clicking on links in spam or phishing emails or opening malicious attachments.

So, how can you avoid becoming a victim if you receive a ransomware email? Checking the sender is the easiest way to recognize one. Is it from a reliable source? Always be cautious if you receive an email from a person or firm you don’t recognize.

Never open email attachments from senders you don’t trust, and never click on links in emails from untrustworthy sources. If the attachment asks you to activate macros, proceed with caution. This is a popular method of ransomware distribution.
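As a minimal illustration of these sender checks, here is a sketch in Python. The trusted-domain list, the risky-extension list, and the `is_suspicious` helper are all hypothetical, not part of any real mail filter:

```python
import re

# Hypothetical helper implementing the checks described above:
# flag mail from untrusted senders or with risky attachment types.
TRUSTED_DOMAINS = {"example.com", "partner.example.org"}          # assumption
RISKY_EXTENSIONS = {".exe", ".js", ".html", ".docm", ".xlsm"}     # macro/executable

def is_suspicious(sender: str, attachments: list[str]) -> bool:
    match = re.search(r"@([\w.-]+)$", sender)
    domain = match.group(1).lower() if match else ""
    if domain not in TRUSTED_DOMAINS:
        return True
    # Even from trusted senders, treat macro-enabled attachments with caution.
    return any(name.lower().endswith(ext)
               for name in attachments for ext in RISKY_EXTENSIONS)

print(is_suspicious("billing@unknown-sender.biz", []))       # True
print(is_suspicious("alice@example.com", ["report.pdf"]))    # False
print(is_suspicious("alice@example.com", ["invoice.docm"]))  # True
```

A real filter would layer many more signals (SPF/DKIM results, URL reputation, sandboxing), but the sender check is the simplest first gate.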

 

Using a Ransomware Decryptor

Do not pay the ransom if you are the victim of a ransomware attack. Paying the ransom demanded by cybercriminals does not guarantee that your data will be returned; after all, these are crooks. It also strengthens the ransomware industry, increasing the likelihood of future attacks. If the data being held to ransom is backed up externally or in cloud storage, you will be able to restore it.

 

Types of Ransomware Extensions

Ransomware typically appends a distinctive file extension to the files it encrypts; some of the known extensions are listed below:

.ecc, .ezz, .exx, .zzz, .xyz, .aaa, .abc, .ccc, .vvv, .xxx, .ttt, .micro, .encrypted, .locked, .crypto, _crypt, .crinf, .r5a, .XRNT, .XTBL, .crypt, .R16M01D05, .pzdc, .good, .LOL!, .OMG!, .RDM, .RRK, .encryptedRSA, .crjoker, .EnCiPhErEd, .LeChiffre, .keybtc@inbox_com, .0x0, .bleep, .1999, .vault, .HA3, .toxcrypt, .magic, .SUPERCRYPT, .CTBL, .CTB2, .locky, or a 6-7 character extension consisting of random characters
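Defenders sometimes sweep file servers for these telltale extensions. Below is a rough Python sketch of such a scan; the `find_suspect_files` helper and the subset of extensions shown are illustrative only:

```python
import os

# Walk a directory tree and flag files whose extensions match known
# ransomware markers (a small subset of the list above).
RANSOMWARE_EXTENSIONS = {
    ".ecc", ".ezz", ".exx", ".zzz", ".aaa", ".micro", ".encrypted",
    ".locked", ".crypto", ".crypt", ".vault", ".locky",
}

def find_suspect_files(root: str) -> list[str]:
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            _, ext = os.path.splitext(name)
            if ext.lower() in RANSOMWARE_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Such a scan only detects an infection after the fact; it is a monitoring aid, not a defense.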

Best Tips to Protect yourself from Ransomware


 

Tips to Protect yourself against Ransomware attacks

Preventing ransomware attacks is becoming more difficult; even large IT departments can struggle. Just ask Sony, the City of Baltimore, or the City of Atlanta.

For the last 40 years, we have built networks and office systems around the concept of sharing data. Shared folders, for example, make it easy for users to exchange and edit documents, but those same shared folders are prime targets of ransomware attacks.

Some tools can be added to reduce the likelihood of ransomware, but nothing can be purchased to “protect” a company.

The most effective protection against ransomware starts with a network and desktop redesign, followed by layers of security and isolated backups. The best approach is not to try to protect against ransomware but to develop a plan that minimizes the impact of an attack. Unfortunately, many of the steps listed below require desktop or office changes, and many organizations are unwilling to change.


The Protected Harbor Difference

At Protected Harbor we will not onboard a client without making the changes needed to protect against ransomware. We believe the new reality is that only good network design and good governance can keep networks safe. Most small IT companies are ill-equipped to understand the depth of the risk, much less take the necessary steps to protect against ransomware.

End-user resistance to change, tight IT budgets, and the notion that IT should be low-cost have created a climate in which companies look for a one-stop, drop-in application or solution to fix all IT problems. This approach will not stop ransomware. In short, at Protected Harbor we protect our clients through better design.


Ways to PROTECT YOUR SYSTEM FROM RANSOMWARE

Below are the steps we take to protect our clients; we recommend that all organizations deploy them.

Desktop/Network & Backup Isolation

The first step in a new network design is to limit exposure through network segmentation. Desktops, servers, and backups should all be on separate, isolated networks. With this approach, an infected desktop cannot reach the backups and therefore cannot infect them.

Virtualization

Protected Harbor accomplishes desktop and network isolation using virtualization. Virtualization allows Protected Harbor to back up the entire desktop: not just shared folders, databases, or scanned folders, but everything. This means we can recover the entire office, not just pieces of it.

Email & Web Filtering

Filtering of email and web content is an important part of the Protected Harbor ransomware defense. Good email filtering should include pattern recognition: initial ransomware attacks follow a template, and a properly configured email filtering system will either block or quarantine them.

Enable network monitoring

We monitor for inbound and outbound traffic, which allows us to react to attack patterns in addition to standard monitoring. Network monitors can alert and warn on unusual traffic, or traffic that is typical of an attack; for example, if certain information is transmitted out of the network that would trigger an alert. We protect our customers by constantly monitoring network traffic, especially activity to or from parts of the world that are high sources of attacks, for example, Russia or China. We also monitor and alert on traffic flow. Oftentimes, if an end-user connects an infected phone or laptop to the network, we will see a change in the traffic flow which will trigger an alert.
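As a simplified illustration of flow-based alerting, the sketch below flags traffic that deviates sharply from a recent baseline. The threshold, the sample numbers, and the `traffic_alert` helper are assumptions for illustration, not Protected Harbor’s actual tooling:

```python
from statistics import mean, stdev

# Alert when current traffic deviates sharply from a rolling baseline,
# e.g. when an infected device suddenly changes the flow pattern.
def traffic_alert(history_mbps: list[float], current_mbps: float,
                  sigmas: float = 3.0) -> bool:
    baseline = mean(history_mbps)
    spread = stdev(history_mbps)
    return abs(current_mbps - baseline) > sigmas * spread

history = [42.0, 40.5, 43.2, 41.8, 39.9, 42.6]   # recent samples (Mbps)
print(traffic_alert(history, 41.0))   # False: within normal range
print(traffic_alert(history, 95.0))   # True: sudden surge triggers an alert
```

Production systems add per-host baselines, time-of-day seasonality, and destination-country checks on top of this basic idea.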

Above is a sample of our traffic monitoring.

Tighten local server/desktop permissions

Our clients do not run their programs as administrators. This enhanced security drastically reduces the impact of a ransomware attack and virtually eliminates malware attacks, limiting what an attack can affect through better design.

Reduce the number of common shares folders

Typically, clients have one or two shared folders that all users can access. Ransomware attacks not only infect those shares but then use them to spread to other, uninfected systems. We work with clients to reduce or eliminate shared folders, increasing protection against ransomware through better design.

Reduce public corporate contact information

Live email addresses should not be published on a website. If a website needs an email address, the published address shouldn’t use the same format as internal addresses: if jsmith is the internal prefix, as in jsmith@abc.com, then the website should publish jacksmith@abc.com instead. Additionally, sensors can be added to the content filter for an address such as petersmith@abc.com. Any IP attempting to send email to that address is really a robot attacker; adding that IP to the block list prevents all future attacks from that source.
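The honeypot-address idea can be sketched as follows; the block-list mechanics and the `inspect_message` helper are hypothetical simplifications:

```python
# Any host that emails the never-published trap address is treated as a
# robot attacker and blocked from then on.
HONEYPOT_ADDRESSES = {"petersmith@abc.com"}
blocked_ips: set[str] = set()

def inspect_message(source_ip: str, recipient: str) -> bool:
    """Return True if the message should be rejected."""
    if recipient.lower() in HONEYPOT_ADDRESSES:
        blocked_ips.add(source_ip)       # ban the sender permanently
        return True
    return source_ip in blocked_ips      # reject anything from known attackers

print(inspect_message("203.0.113.9", "petersmith@abc.com"))  # True (trap hit)
print(inspect_message("203.0.113.9", "jsmith@abc.com"))      # True (now blocked)
print(inspect_message("198.51.100.7", "jsmith@abc.com"))     # False
```

In practice the block list would live in the mail gateway or firewall rather than in application memory.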

Parameter or Geo Blocking

For our clients we maintain enhanced network protection that includes active parameter checking and geo-blocking. For example, we check the source address of inbound requests; if the IP is from a blocked country, the traffic is stopped before it even reaches the client’s network. Countries we routinely block include North Korea, Russia, and others known for originating ransomware attacks. If access is needed from a blocked country, a simple support ticket resolves the issue.
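A minimal sketch of the geo-blocking check, assuming a lookup table of country CIDR ranges. Real deployments use a GeoIP database; the ranges below are placeholders for illustration:

```python
import ipaddress

# Placeholder country-to-CIDR table; a real system would query a
# regularly updated GeoIP database instead.
BLOCKED_RANGES = {
    "KP": [ipaddress.ip_network("175.45.176.0/22")],
    "RU": [ipaddress.ip_network("198.51.100.0/24")],  # documentation range
}

def is_blocked(ip: str) -> bool:
    """Return True if the source IP falls inside any blocked country range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net
               for nets in BLOCKED_RANGES.values() for net in nets)

print(is_blocked("175.45.177.10"))  # True: inside a blocked range
print(is_blocked("8.8.8.8"))        # False
```

Performing this check at the network perimeter means blocked traffic never consumes resources inside the client’s network.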

Testing & Training

At Protected Harbor we perform routine simulated ransomware attacks. These tests help end users stay vigilant against attacks and identify users who might need additional assistance in understanding the importance of handling email carefully.

What is a Ransomware attack?


 

“We guarantee we can PROTECT YOU FROM RANSOMWARE!”

 

Any vendor that says or implies that is lying. There is no magic pill, service, or device to stop ransomware. Done right, guarding against ransomware is a combination of multiple technologies: backups, education, good layered network design, and human intervention.

Protected Harbor is a unique vendor because we don’t resell other companies’ services; we engineer our own solutions. That depth of knowledge is a foundational difference between us and anyone else, and it allows us to write this document and solve the problem at its core rather than band-aid it as others do.

 

Ransomware Explained

Ransomware is malicious software that targets computer systems and locks down important data until a ransom is paid. Ransomware is an increasingly prevalent form of cyber-attack, which can cause serious disruption to businesses and individuals alike. It works by malicious actors encrypting a victim’s data and then demanding a ransom payment in order to restore access to it. Organizations must take active steps toward ransomware protection and prevention, as the costs associated with a successful attack can be substantial. Investing in robust IT security measures, such as antivirus software and regular backups, will significantly reduce the risk of becoming a target. Furthermore, ensuring employees have the necessary understanding of ransomware prevention techniques will help protect your organization from this form of cyber-attack.

 

What is a Ransomware attack?

A ransomware attack encrypts files so they cannot be opened without a password that only the attacker holds. Most of the time, the encryption is self-executed against local files, network files, and operating system files, combined with Trojan installations that enable later data theft or additional attacks.

Most of us have used or created a password-protected ZIP file before. ZIP files are a form of encrypted and compressed file. The compression algorithm mathematically removes the empty and repeated patterns in the data, and the password is used as a seed to secure the result. Using this technique, a ZIP file is both secure, because without the password it can’t be decrypted, and smaller in size.
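The compress-then-encrypt idea can be shown with a toy Python example. This is not the real ZIP algorithm; it merely demonstrates how a password-derived keystream makes data both smaller and unreadable without the password:

```python
import hashlib
import zlib

# Toy illustration only: compress to remove redundancy, then XOR with a
# keystream derived from the password. Real archivers use AES, not this.
def lock(data: bytes, password: str) -> bytes:
    compressed = zlib.compress(data)                  # remove redundancy
    key = hashlib.sha256(password.encode()).digest()  # password as seed
    stream = (key * (len(compressed) // len(key) + 1))[: len(compressed)]
    return bytes(a ^ b for a, b in zip(compressed, stream))

def unlock(blob: bytes, password: str) -> bytes:
    key = hashlib.sha256(password.encode()).digest()
    stream = (key * (len(blob) // len(key) + 1))[: len(blob)]
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, stream)))

secret = b"quarterly financials " * 50
locked = lock(secret, "hunter2")
assert unlock(locked, "hunter2") == secret   # right password recovers data
assert len(locked) < len(secret)             # and the result is smaller
```

With the wrong password the decompression step simply fails, which is exactly the position a ransomware victim is in: the data is intact but useless without the attacker’s key.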

A ransomware attack, at its core, encrypts an organization’s data files using a technique similar to a password-protected ZIP file. Typically, ransomware attacks encrypt one file at a time. They can be devastating because the data, once encrypted, is not recoverable without the attacker’s key. Initial versions of ransomware targeted local files on local computers, but more recent attacks have caused greater damage by targeting network folders and operating system files. Once an operating system file is infected, the server or PC will never work right and should be completely reformatted and rebuilt.

Ransomware attacks also attempt to install infected files, called Trojans. The Trojans are used to attack the computer or server again later and/or to monitor the infected system and steal data. Some Trojans don’t attack directly but instead run in the background, monitoring and sending out new data; this is what occurred in the Sony attack. Modern cleaning tools like Malwarebytes do a good job of removing infected cookies and web attacks but do not clean operating system files very well, which is why we always recommend rebuilding a PC or server rather than cleaning it.

How does a Ransomware attack occur?

But how did it occur? How did it get in? Virtually all of the time, the attack is self-started, meaning it was triggered by a trusting employee. Most ransomware attacks start via email: an external email server or email account is compromised, and the compromised account is then used to send out infected emails.

The image is an example. The email itself is not infected; the email account is legitimate, and at the time the email server amegybank.com was not flagged as a spammer, meaning this email would have passed through most firewalls, filters, and blocking services.

The infection is the attached HTML file; the attachment is the payload. To many anti-virus programs, the HTML file will look like a web cookie or bot, i.e., a legitimate attachment. Payloads can take many forms; macros in Word, Excel, or PDF files are typically used.


A payload is a small piece of programming code designed to look like legitimate content from a website. Once the end-user clicks on the attachment, the payload is activated and downloads the actual attack from a remote site. The attack is a larger program, also designed to slip through firewalls and content filters, that starts to encrypt files and hunts for links to remote data: remote server (RDP) login information, website links with stored passwords, FTP or SFTP file-transfer links, virtually any form of data connection. The attack is designed to find as much data as possible; the more data that is encrypted, the more the infected company is willing to pay.

Information Technology IT Trends in 2021


 

What are the new IT Trends in 2021?

 

We’ve been so deep in this pandemic that some of us have forgotten what life was like before it. Remember when we used to get together for lunch, go to a ball game, celebrate holidays together, and not wear masks? 2021 will begin with more of the same as 2020 but will shift toward “normal.”

When? That’s anyone’s guess. But what will happen with technology? It can be argued that without technology, the economy and education would have taken an even bigger hit than they did in 2020. Platforms like Zoom, Microsoft Teams, and Google Hangouts allowed us to work in a virtual world. Companies like Protected Harbor’s clients, who were smart enough to set up virtual desktops, made the move to working from home seamlessly.
So when work moves back to normal, what will technology look like? What trends will continue from 2020? What new trends will emerge?

Trend 1: Drug development revolution with advanced Covid-19 testing and vaccine development

 

Operation Warp Speed changed the way drugs are developed, tested, and trialed. Assuming the Pfizer and Moderna vaccines prove to be safe (and we feel strongly they will), the speed at which vaccines are brought to market will increase dramatically. Both Pfizer and Moderna developed mRNA vaccines, the first in human history! We expect more innovations throughout 2021.

Also, COVID self-test kits are being developed all over the world. We expect this trend to continue and perhaps expand to self-test kits for other diseases.

Trend 2: Continued expansion of remote working and video conferencing

Remote working was already gaining traction going into 2020 and grew exponentially during the pandemic; that growth will likely continue in 2021. Many of our clients have realized they are just as productive with a remote workforce as they were before, and some have permanently moved to a work-from-home environment.

Zoom, which grew from a startup in 2011 to going public in 2019, became a household name during the pandemic. Other large corporate tools such as Cisco’s Webex, Microsoft’s Teams, Google Hangouts, GoToMeeting, and Verizon’s BlueJeans are also providing state-of-the-art videoconferencing systems, facilitating remote work across the globe.

Many new ventures are emerging in the remote-working sector. Startups Bluescape, Eloops, Figma, Slab, and Tandem have all provided visual collaboration platforms enabling teams to create and share content, interact, track projects, train employees, run virtual team-building activities, and more.

These tools also help distributed teams keep track of shared learning and documentation. Users can create a virtual office that replicates working together in person by letting colleagues communicate and collaborate with one another easily.

remote working from home

Trend 3: Contactless delivery and shipping remain as the new normal

Due to the pandemic, the US has seen a 20% increase in customers who prefer contactless delivery. Companies leading in this space are DoorDash, Postmates, Instacart, Grubhub, and Uber Eats, and they will continue to flourish in 2021. Trend #10 (autonomous driving) may be combined with contactless delivery to offer a truly futuristic way of delivering goods and food.


Trend 4: Telehealth and telemedicine flourish

Telehealth visits have surged by 50 percent compared with pre-pandemic levels. IHS Technology predicted that 70 million Americans would use telehealth by 2020. Since then, Forrester Research predicted the number of U.S. virtual care visits will reach almost a billion early in 2021.

Teladoc Health, Amwell, Livongo Health, One Medical, and Humana are some of the public companies offering telehealth services to meet their current needs.

Startups are not far behind. Startups like MDLive, MeMD, iCliniq, K Health, 98point6, Sense.ly, and Eden Health have also contributed toward meeting the growing needs in 2020 and will continue offering creative solutions in 2021. Beyond telehealth, in 2021 we can expect to see health care advancements in biotech and A.I., as well as machine learning opportunities (example: Suki AI) to support diagnosis, admin work, and robotic health care.

In many ways, patients prefer telehealth and virtual doctor’s appointments: there’s no more waiting forever in the waiting room, and the doctor simply video-calls you when they’re ready. As telehealth grows in 2021, tech companies will need to ensure they are HIPAA compliant and that videos are kept private and safe from hackers.


Trend 5: Online education and e-learning as part of the educational system

Covid-19 fast-tracked the e-learning and online education industry. During this pandemic, 190 countries have enforced nationwide school closures at some point, affecting almost 1.6 billion people globally.

There is a major opportunity with schools, colleges, and even coaching centers conducting classes via videoconferencing. Many institutions have actually been recommended to pursue a portion of their curriculum online even after everything returns to normal.

The challenge in 2020 was the availability of high-speed internet, especially in low-income neighborhoods.  As the economy recovers in 2021, we expect more and more households will have this access.

Over time, we expect internet access to be considered just as critical as food, water and electricity.


Trend 6: Increased development of 5G infrastructure, new applications, and utilities

There is no doubt that demand for higher-speed internet and a shift toward well-connected homes, smart cities, and autonomous mobility have pushed the advancement of 5G-6G internet technology. In 2021, we will see new infrastructure and utility or application development updates both from the large corporations and startups.

Many telcos are on track to deliver 5G, with Australia having rolled it out before Covid-19. Verizon announced a huge expansion of its 5G network in October 2020, which will reach more than 200 million people. In China, 5G deployment has been happening rapidly. There are more than 380 operators currently investing in 5G. More than 35 countries have already launched commercial 5G services.

Startups like Movandi are working to help 5G transfer data over greater distances; startups like Novalume help municipalities manage their public lighting networks and smart-city data through sensors; and Nido Robotics is using drones to explore the seafloor.

Through 5G networks, these drones navigate better and use IoT to communicate with devices on board. Startups like Seadronix from South Korea use 5G to help power autonomous ships: 5G networks enable devices to work together in real time, helping vessels travel unmanned.

The development of 5G and 6G technology will drive smart-city projects globally and will support the autonomous mobility sector in 2021.

Trend 7: A.I., robotics, internet of things, and industrial automation grow rapidly

In 2021, we expect to see huge demand and rapid growth of artificial intelligence (A.I.) and industrial automation technology. As manufacturing and supply chains are returning to full operation, manpower shortages will become a serious issue. Automation, with the help of A.I., robotics, and the internet of things, will be a key alternative solution to operate manufacturing.

Some of the top technology-providing companies enabling industry automation with A.I. and robotics integration include:

UBTech Robotics (China), CloudMinds (U.S.), Bright Machines (U.S.), Roobo (China), Vicarious (U.S.), Preferred Networks (Japan), Fetch Robotics (U.S.), Covariant (U.S.), Locus Robotics (U.S.), Built Robotics (U.S.), Kindred Systems (Canada), and XYZ Robotics (China).

Also, as we discuss in Trend # 10 (autonomous driving), AI has played, and will continue to play, a key role in autonomous driving, as cars “learn” how humans react to certain road conditions.

Trend 8: Virtual reality (VR) and augmented reality (AR) technologies usage rises

Augmented reality and virtual reality have grown significantly in 2020. These immersive technologies are now part of everyday life, from entertainment to business. The arrival of Covid-19 has prompted this technology adoption as businesses turned to the remote work model, with communication and collaboration extending over to AR and VR.

The immersive technologies from AR and VR innovations enable an incredible source of transformation across all sectors. AR avatars, AR indoor navigation, remote assistance, integration of A.I. with AR and VR, mobility AR, AR cloud, virtual sports events, eye tracking, and facial expression recognition will see major traction in 2021. Adoption of AR and VR will accelerate with the growth of the 5G network and expanding internet bandwidth.

Companies like Microsoft, Consagous, Quytech, RealWorld One, Chetu, Gramercy Tech, Scanta, IndiaNIC, and Groove Jones will play a significant role in shaping our world in the near future, not only because of AR’s and VR’s various applications but also as the flag carriers of all virtualized technologies.

Trend 9: Continued growth in micromobility

While the micro-mobility market had seen a natural slowdown at the beginning of the Covid-19 spread, this sector has already recovered to the pre-Covid growth level. E-bikes and e-scooters’ usage is soaring since they are viewed as convenient transportation alternatives that also meet social distancing norms. Compared to the pre-Covid days, the micro-mobility market is expected to grow by 9 percent for private micro-mobility and by 12 percent for shared micro-mobility.

Hundreds of miles of new bike lanes have been created in anticipation. Milan, Brussels, Seattle, Montreal, New York, and San Francisco have each introduced 20-plus miles of dedicated cycle paths. The U.K. government announced that diesel and petrol-fueled car sales will be banned after 2030, which has also driven interest in micro-mobility as one of the alternative options.

Startups are leading the innovation in micro-mobility. Bird, Lime, Dott, Skip, Tier, and Voi are key startups leading the global micro-mobility industry.

China has already seen several micro-mobility startups reach unicorn status, including Ofo, Mobike, and Hellobike.

 

Trend 10: Ongoing autonomous driving innovation

We will see major progress in autonomous driving technology during 2021. Tesla has clearly led the way: its Autopilot not only offers lane centering and automatic lane changes but, as of this year, can also recognize speed signs and detect green lights.

Honda recently announced that it will mass-produce autonomous vehicles that, under certain conditions, will not require any driver intervention. Ford is also joining the race, anticipating the launch of an autonomous ridesharing service in 2021; the company could also make such vehicles available to certain buyers as early as 2026. Other automakers, including Mercedes-Benz, are trying to integrate some degree of autonomous driving technology into their new models from 2021. GM intends to roll out its hands-free-driving Super Cruise feature to 22 vehicles by 2023.

The fierce market competition is also accelerating self-driving technology growth in other companies, including Uber, Lyft and Waymo. Billions of dollars have been spent in acquiring startups in this domain: GM acquired Cruise for $1 billion; Uber acquired Otto for $680 million; Ford acquired Argo AI for $1 billion; and Intel acquired Mobileye for $15.3 billion.

Looking ahead

Technology development in 2021 will be somewhat of a continuation of 2020, but the influence of Covid-19 will evolve during the year. Many of our new behaviors will become part of the new normal in 2021, helping drive major technological and business innovations.

Protected Harbor continues to monitor these new technologies and looks to bring them to clients if and when there is a business need. For more information, please visit Protected Harbor.

The Reasons Applications Fail


 

5 REASONS APPLICATIONS FAIL

99.99% Uptime Is Essential

In today’s world of telemedicine, application availability and uptime are more critical than ever.

Healthcare workers and patients are accessing applications at all times of the day and night. The days of “bringing the application down for maintenance” every night are over.

Add to this the fact that most healthcare companies are growing, which adds extra load to these already stressed applications.

EMR and other key applications need to be available virtually 100% of the time.
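To put these uptime figures in concrete terms, a quick calculation shows the downtime budget each availability level allows per year:

```python
# Worked example: the downtime each "nines" level actually allows per
# (365-day) year. 99.99% sounds close to 100%, but the difference matters.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for uptime in (99.9, 99.99, 99.999):
    allowed = MINUTES_PER_YEAR * (1 - uptime / 100)
    print(f"{uptime}% uptime -> {allowed:.1f} minutes of downtime per year")
# 99.9%   -> 525.6 minutes (almost nine hours)
# 99.99%  -> 52.6 minutes
# 99.999% -> 5.3 minutes
```

At 99.99%, a single one-hour outage blows the entire annual budget, which is why nightly maintenance windows are no longer acceptable.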


How Much Does A Single Hour Of Downtime Cost?

According to an ITIC study this year, the average cost of a single hour of downtime is $100,000 or more. Since 2008, ITIC has conducted independent surveys that measure downtime costs. The findings show that the cost of a single hour of downtime has risen by 25%-30%. 98% of organizations say a single hour of downtime costs over $100,000; 81% of respondents indicated that 60 minutes of downtime costs their businesses over $300,000; and 33% of enterprises reported that one hour of downtime costs them between $1 million and $5 million.

Protected Harbor has found the design of data centers plays an essential role in its ability to maintain application availability which translates into company credibility with clients, employees, and ultimately dollars gained or lost.

The purpose of this white paper is to outline the top five mistakes companies make when designing, building, and managing data centers.


“It’s Much Harder To Manage A Data Center For A Growing Business Than One For A Stagnant Business.”

“This saying has stuck with me over the years. Most of the businesses my company supports are growing companies. They trust I can design, build and manage a data center that will develop with them, and not impede on their growth.

According to a recent article by a top data center management company, only 4% of data center failures are due to IT equipment failure. Only 4%! That leaves 96% of data center failures caused by things outside of your data center equipment, whether it be power failure, cyber-crime, human error, or water/heat.

What does this mean for you? Well, at the inception of designing your data center, elements that may seem innocuous must be considered, because these components could have a significant impact on how your data center functions, or doesn’t function. Regardless of whether you are building a data center or migrating, it is imperative that you avoid falling into the traps that have ensnared many before you.

Protected Harbor has enough experience with all of the above issues to understand how crippling they can be for small, medium, and large organizations. Data centers’ popularity has increased exponentially over the past decade, and for good reason: they enable a business to expand while being cost-effective and reliable. Recently, a client asked us to list the common mistakes companies make when designing, building, and managing their data centers. When compiling this list, we broke these mistakes into three major categories: People, Processes, and Tools. If you are about to embark down the data center path, make sure you don’t tumble into these pitfalls and wind up in a state of confusion and chaos.”

“It’s much harder to manage a data center for a growing business than one for a stagnant business.”
Richard Luna
– CEO, Protected Harbor

01

Five Mistakes Companies Make That Cause Applications To Fail

PEOPLE: Organizing IT Staff in Vertical Roles vs. Horizontal Roles
Human error accounts for almost one quarter of all data center outages

We believe this has a lot to do with how IT staff are organized at most companies. IT departments have DBAs (both development and production), programmers dedicated to a single system, networking experts, storage experts, and so on. This level of specialization can be a big problem.

In many organizations, managers develop elaborate handoff processes that are confusing and often not followed. The programmer hands off work to the database expert, who then hands off to the storage person. Often there is no manager who understands the big picture until you reach the IT Director or the CIO, who is too senior and too removed from the details to provide real direction. IT staff lose the ability to view the system horizontally (and holistically) and to understand the big picture. Steps are missed, mistakes are made, and when the data center crashes, groups point fingers at other groups and the true cause of the outage is never determined – which means it can happen again.

We recommend assigning IT process owners: IT staff members responsible for managing IT processes. These individuals first document the process and then put end-to-end controls in place to ensure it is followed.

02

Inadequate Redundancy

TOOLS: Power issues, including issues with the UPS or generator, and other environmental issues, account for over 45% of data center outages

The IT team may understand the need for redundancy but fail to carry it through the entire system. Often they will ensure redundancy in one network layer (one portion of the system for communicating data). However, the operational stability of the data center requires that multiple networking layers be online, all the time.

In other words, each layer needs to be redundant. For hardware, that means two mirrored firewalls, two drops, and two mirrored core switches. For software, it means multiple servers performing the same function, configured in a primary/secondary or pool configuration. If a server fails, its workload is migrated or transferred to a redundant server. We ensure redundancy at every level.
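The primary/secondary failover logic described above can be sketched in a few lines. This is a minimal illustration, not Protected Harbor's implementation: `health_check`, `route_request`, and the server names are hypothetical stand-ins, and a real deployment would rely on a load balancer or cluster manager rather than hand-rolled routing.

```python
def health_check(server: str, healthy: set) -> bool:
    """Stand-in probe; in practice this would be a TCP connect
    or an HTTP request to the server's health endpoint."""
    return server in healthy


def route_request(servers: list, healthy: set) -> str:
    """Return the first healthy server, with servers listed in
    priority order (primary first, then its redundant peers)."""
    for server in servers:
        if health_check(server, healthy):
            return server
    raise RuntimeError("no healthy servers: full outage")


pool = ["app-primary", "app-secondary"]

# Normal operation: traffic goes to the primary.
print(route_request(pool, {"app-primary", "app-secondary"}))  # app-primary

# Primary fails: the workload transparently moves to the secondary.
print(route_request(pool, {"app-secondary"}))  # app-secondary
```

The key property is that the failover decision is made per request against live health state, so a failed primary never sits in the traffic path.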

03

System Software Not Directly Connected To The Firewalls

TOOLS: Cyber-crime accounts for over 20% of data center outages

Any data center needs to worry about external vulnerability to attacks. Companies can buy a high-end firewall package that does advanced monitoring, but what happens behind that firewall? Most companies fail to understand the importance of connecting application login activity to firewall activity. For example, if the organization has RDP servers that cannot distinguish a legitimate login from an invalid one, how do you block the attacker? This isn’t done automatically, because many of the individual apps in use are customized.

The best approach to this problem is to avoid it: design the system the right way at its inception. For example, deploy a module that, after three failed login attempts against a particular app, blocks that IP address right at the firewall.
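The core of such a module is just per-IP failure counting with a threshold. The sketch below shows the decision logic only, under assumed names (`LoginGuard`, `record_failure`); the actual firewall call is vendor-specific and is marked as a placeholder comment. Dedicated tools such as fail2ban implement this same pattern in production.

```python
from collections import defaultdict

MAX_ATTEMPTS = 3  # block after three failed logins, as described above


class LoginGuard:
    """Track failed logins per source IP and decide when to block."""

    def __init__(self, max_attempts: int = MAX_ATTEMPTS):
        self.max_attempts = max_attempts
        self.failures = defaultdict(int)  # ip -> consecutive failures
        self.blocked = set()

    def record_failure(self, ip: str) -> bool:
        """Register a failed login; return True if the IP is now blocked."""
        if ip in self.blocked:
            return True
        self.failures[ip] += 1
        if self.failures[ip] >= self.max_attempts:
            self.blocked.add(ip)
            # Placeholder: push a deny rule for `ip` to the firewall here,
            # via whatever management API your firewall exposes.
            return True
        return False

    def record_success(self, ip: str) -> None:
        """A valid login resets the failure counter for that IP."""
        self.failures.pop(ip, None)


guard = LoginGuard()
for _ in range(3):
    blocked = guard.record_failure("203.0.113.7")
print(blocked)  # True: the third failure triggers the firewall block
```

Because the block is applied at the firewall rather than inside the app, subsequent traffic from that address never reaches the vulnerable service at all.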

04

Data System Growth Not Sufficiently Considered In Budgeting

PROCESS: Many data centers crash because the data center environment was designed and built for a smaller organization, and cannot handle the increase in load due to company growth.

Many industries and companies see periods of rapid growth and do their best to predict how it might affect operational needs in sales, marketing, and manufacturing. However, IT often gets left behind in the budgeting dust, and the result is underfunding and an inability of the IT systems or data center capabilities to match the expectations of the rest of the organization.

Typically, this underfunding leads to cannibalizing equipment, exceeding recommended capacities, and running hardware beyond its expected lifespan. It often forces IT staff to find quick fixes to keep the data center operational. Regarding these quick fixes, we often observe a related error: the IT staff forgets to remove the bandages that got them past isolated problems. This is a lost opportunity to go back and properly resolve the underlying problem – there are simply no resources available to do it.

We recommend that the IT leader work closely with the company’s leadership team to understand business trends, and with IT experts to design a data center environment that can grow with the organization. Just like the leaders of other departments, the IT leader needs to outline the key IT investments that will be required if the company grows. If a company’s core competence is healthcare, it may not want to be in the data center management business.

05

Not Having Clearly Written Procedures, Designated Lines Of Authority, And As A Result, Accountability

PROCESS: When completing a new deployment, the people who understand the system and how it was designed should compile the procedural manual for handling isolated issues, maintenance, and system-wide failures. It should also include lines of authority, which define areas of responsibility. Only once these are delineated can one expect accountability from the individuals on the IT team. Too often, organizations are barely organized, and these vital documents do not exist (or staff are unaware of their existence).

We recommend that procedures be created, documented, and followed in a specified way, guiding the appropriate deployment of IT assets. Clearly stated lines of authority are required to make this work.

We Are Here to Help!

If you are an IT executive, director, or decision maker concerned that your company is falling prey to any of the aforementioned problems, let Protected Harbor help you navigate them by implementing a comprehensive, secure, and durable strategy.

Protected Harbor is an MSP that helps organizations and businesses across the US address their current IT needs by building secure, custom, and protected long-term solutions. Our technology experts evaluate current systems, identify deficiencies, and design cost-effective options. We assist IT departments by increasing their security, durability, and sustainability, freeing them up to concentrate on their daily workloads. Protected Harbor stands tall in the face of cyberattacks, human error, technical failure, and compliance issues. www.protectedharbor.com
