Category: Business Tech

AWS Global Outage Disrupts Services: The Aftermath


Facebook, Alexa, Reddit, Netflix, and more apps were affected by the AWS outage.

If you faced problems logging in to Amazon.com for shopping ahead of Christmas, you’re not alone. On Tuesday, December 7, large parts of the internet reported disrupted services that run on the AWS platform. Netflix, Alexa, Disney+, Reddit, and IMDb are some of the services that reported downtime.

UPDATE (19:35 EST / 16:35 PST): The official Amazon Web Services dashboard published the following statement: “With the network device problems resolved, we are now working towards the recovery of any impaired services. We will roll out additional updates for impaired services within the relevant entry in the Service Health Dashboard.”

AWS down

Users began reporting issues around 10:45 AM ET on Tuesday, taking to Twitter and other social media platforms to discuss the outage. More than 24,000 people reported problems with Amazon, including Prime Video and other services, on DownDetector.com. The website collects outage reports from multiple sources, including user-submitted errors.

The problems originated in the US-EAST-1 AWS region in Northern Virginia, so users elsewhere may not have noticed as many issues; even if you were affected, you might have seen only slightly slower loading times while the network redirected your requests.

Peter DeSantis, AWS’ vice president of infrastructure, led a 600-person internal call about the then-ongoing outage. Some said it was likely an internal issue, and others pointed to more nefarious possibilities.
“We have mitigated the underlying issues that caused network devices in the US-EAST-1 Region to be impaired,” AWS said on its status page.

What caused the outage?

Engineers at Amazon Web Services (AWS), the largest cloud computing provider in the US, have yet to confirm what caused the December 7 outage. AWS does not currently list any issues on its status page. Previous outages have also gone unreflected on the status page, or have even brought the page down entirely, so this is not unusual.
There is, however, a 500 server error on the page for the us-east-1 AWS Management Console Home, where information about the Northern Virginia region should appear.

A 500 internal server error means the server received the request but could not return the page; an error response is delivered instead of the web page because something inside the server failed, for example a storage failure that leaves the file unavailable.
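To make this concrete, here is a minimal sketch of checking whether a page is returning an HTTP 500; the URL is a placeholder for illustration, not the actual AWS console endpoint.

```python
# Minimal sketch: detecting an HTTP 500 response with Python's requests library.
# The URL below is illustrative only, not a real endpoint.
import requests

response = requests.get("https://console.example.com/home", timeout=10)

if response.status_code == 500:
    # The server accepted the request but something failed internally,
    # so an error body comes back instead of the expected page.
    print("500 Internal Server Error: the server could not render the page")
else:
    print(f"Received status {response.status_code}")
```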

“Possible causes are internal routing problems within Amazon, a defective Amazon-wide update, or an Amazon-wide misconfiguration. A defective API (application programming interface) or a network device issue might also be behind the Amazon console being down,” said Richard Luna, CEO of Protected Harbor.

The Amazon global outage comes just a few months after Meta Platforms, Inc. (FB) went offline due to network problems, taking down some of its most popular apps, including WhatsApp, Instagram, and Facebook Messenger.
The research firm Gartner Inc. estimates that major cloud platforms suffer significant outages about once per quarter. Because AWS holds the largest share of the cloud infrastructure market, and because many people continue to work and study from home during the pandemic, the service disruption was widely felt. Gartner vice president Sid Nag told The Wall Street Journal that “these guys have become almost too big to fail,” since our day-to-day lives rely heavily on cloud computing services.

 

Hasn’t This Happened Before?

Yes, AWS downtime is not a new occurrence. The last major AWS global outage happened in November 2020. Numerous other disruptive and lengthy cloud service interruptions have involved various providers. In June, the behind-the-scenes content distributor Fastly experienced a failure that briefly took down dozens of major internet sites, including CNN, The New York Times, and Britain’s government home page. Another cloud service interruption that month affected provider Akamai during peak business hours in Asia.

In the October outage, Facebook — now known as Meta Platforms — blamed a “faulty configuration change” for hours of worldwide downtime that took down Instagram and WhatsApp in addition to its titular platform.

 

Credible solutions

On Tuesday, the world received a reminder of just how much we rely on Amazon Web Services, and of how painful AWS outage recovery can be. A brief outage disrupted the operations and services of millions of people. Amazon effectively holds a monopoly and would never partner with another provider, so the simplest solution is to opt for a service provider who puts customers first.

Amazon, as big as it is, still provides clients with a single server location; at its core, it is one batch of servers. Protected Harbor solves this problem by spreading customers across multiple server locations, preventing a site-wide misconfiguration. We protect our clients by using various services, and we expect any one service to fail; that expectation gives us time to resolve and repair the situation quickly.

We differentiate ourselves from other providers by being proactive and planning for failures like this. We do it all the time: we partner with other providers to deliver unmatched services to our customers, because their satisfaction comes first.

 

Key Takeaways:

  • An hours-long AWS outage crippled popular websites and disrupted smart devices, as well as creating delivery delays at Amazon warehouses.
  • Companies like Facebook, Netflix, Reddit, IMDB, Disney+, and more were affected by the outage.
  • Amazon stated that it “identified the root cause” but has yet to reveal precisely what that root cause was.
  • AWS holds the largest share of the cloud infrastructure market, and outages like this are not uncommon.
  • Now is the time to choose the provider that satisfies you and your business needs.

Go complete risk-free

Protected Harbor is the underdog in the market that exceeds customers’ expectations. With its data center and managed IT services, it has stood the test of time with customers, who routinely describe the experience as “beyond expectations.” With best-in-segment cloud services and optimum IT support, safety, and security, it’s easy to see why organizations choose to stay with us. This way to the crème de la crème.

SaaS vs DaaS

 


 

Learn the Fundamentals

After the cloud’s inception in the world of technology in 2006, we saw a rise in the number of providers delivering scalable, on-demand, customizable applications for personal and professional needs. Identified nowadays as cloud computing, in the most basic terms it is the delivery of IT services through the Internet, including software, servers, networking, and data storage. These service providers differentiated themselves according to the kinds of services they offered, such as:

  • Software as a Service (SaaS)
  • Desktop as a Service (DaaS)
  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)

Cloud computing enabled an easily customizable model with strong computing power, lower service prices, greater accessibility, and convenience, in addition to the latest IT security. This motivated a large number of small and medium-sized firms to begin using cloud-based apps to perform specific tasks in their businesses.
The cloud computing world can be a confusing place for a business: should they use DaaS, SaaS, PaaS, or something else? As a first step, we will explain each service and what it is best used for.

 

SaaS

SaaS, or Software as a Service, is a cloud-based application (or suite of applications) delivered to end users via the Internet. The end user does not own the app, nor is it stored on the user’s device. The consumer gets access to the application via a subscription model and generally pays for licensing of the application.

SaaS software is simple to manage and can be used as long as one has a device with an active internet connection. One benefit is that end users on a SaaS platform do not have to worry about frequent upgrades to the program, as these are handled by the cloud hosting service provider.

 

DaaS

DaaS, or Desktop as a Service, is a subscription model that provides businesses with virtual desktops delivered over protocols such as RDP (Remote Desktop Protocol). Licensed users have access to their own applications and files anywhere, at any time. Nearly any application you are already using or intend to use can be integrated into a DaaS model. DaaS provides whatever level of flexibility your small, medium, or enterprise-level business requires, while still allowing you to manage your own information and desktop.

In the DaaS model, the service provider is responsible for the storage, backup, and security of the information. Only a thin client is required to access the service; these clients are end-user terminals used solely to present a graphical user interface. Subscriber hardware costs are minimal, and the virtual desktop can be accessed from any location, device, or network.

PaaS

Platform as a service is an application platform where a third-party provider allows a customer to use the hardware and software tools over the internet without the hassle of building and maintaining the infrastructure required to develop the application.

 

IaaS

Infrastructure as a Service is a cloud computing service where the provider hosts the infrastructure on a public or private cloud and rents or leases it to enterprises, saving them the cost of maintaining and operating their own servers.

 

DaaS vs SaaS: The Key Differences

SaaS and DaaS are both applications of cloud computing, but they have fundamental differences. In simple terms, the SaaS platform focuses on making software applications available over the internet, while Desktop as a Service delivers the whole desktop experience by integrating several applications and the required data for the subscriber. DaaS users only need a thin client to enjoy the services, while SaaS companies provide their services through a fat client. SaaS users need to store and retrieve the data produced by the application themselves, but DaaS users don’t have to worry about the data, as the service provider is responsible for its storage and backup.

You’ll find few who will disagree that ease of use is a reason why “Software as a Service” is a staple of businesses and has risen to popularity among enterprises both large and small. As for convenience, the rollout is more effortless than that of a DaaS situation. SaaS is the more versatile option of the two, and best of all, there are very affordable options if you’re trying to pinch those pennies as a smaller entity.
One of the key components of utilizing DaaS is security, closely followed by efficiency. From a security standpoint, because information is housed in a data center, DaaS lends itself to increased and more reliable security, removing the risk that comes with data being hosted on the devices themselves.

Working
Managed DaaS provides virtual desktops for managing applications and associated data, with user data copied to and from the virtual desktop at log in and log out. SaaS delivers web-based software accessible via internet and browser, with backend operations and databases managed in the cloud.

Control
DaaS offers a complete desktop experience and allows users to store information within their own data center, providing full control. SaaS, however, follows a “one-to-many” model, offering access to specific applications shared across multiple clients, without a full desktop environment.

Interoperability
DaaS virtualizes the entire desktop, enabling smooth application integration. SaaS applications can also be integrated but may face challenges due to their hosting location and delivery method.

Mobility
DaaS is typically used with a PC and full-size screen but can be accessed from mobile devices. SaaS applications are designed to work well on both PCs and mobile devices like smartphones and tablets.

Ideal Use Cases
DaaS is ideal for resource-limited businesses seeking cloud solutions. SaaS suits businesses needing access to individual applications from any device, without hardware updates.

Understanding the differences between SaaS and DaaS for business helps in choosing the right cloud service for specific needs.

 

Ideal Use Cases: SaaS vs DaaS

  • Ideal Use Case: DaaS is best for businesses with limited resources looking to utilize cloud computing solutions and virtual desktop infrastructure. SaaS is suitable for businesses needing access to individual applications across devices without the need for hardware upgrades.
  • Service Provided: DaaS delivers a full virtual desktop infrastructure as a service. SaaS delivers individual software applications over the web.
  • Type of Service: DaaS offers virtual desktops and applications. SaaS operates through web-based applications.
  • Management: The DaaS provider handles upgrades, critical management tasks, and backups. All backups and critical computations are managed by the SaaS provider in the cloud.
  • Best For: DaaS is ideal for users needing high-computation virtual desktops in remote areas, such as solutions for remote healthcare providers. SaaS is perfect for businesses avoiding hardware investments for specific software.
  • Ownership: In DaaS, desktop applications are installed on the service provider’s virtual desktops. In SaaS, the software is owned and managed by the service provider.
  • Application Integration: Applications can be seamlessly integrated into the DaaS model. Integration of applications in a SaaS model can sometimes be challenging.

 

Which one’s for you?

So, you’re probably wondering: should your company adopt SaaS or DaaS? Our question is, why not use both? It is true that the cloud-based SaaS business model offers the flexibility to use software features without needing to host the applications; however, the DaaS model has its own advantages. The reality is that most businesses need a hybrid solution that utilizes the capabilities of both SaaS and DaaS. Using both services allows them to access the functionality they need to be efficient while maintaining the ease and security of having all their business applications on one dashboard with a single sign-on, along with staff auditing capabilities.

Ultimately, the decision to adopt SaaS or Desktop as a Service depends on your company’s specific needs and resources. It’s important to weigh the benefits and drawbacks of each option and consider factors such as cost, security, and compatibility with existing systems. It may also be helpful to consult with a technology professional or service provider to determine the best option for your company.

 

Some additional benefits of using both SaaS and DaaS:

  • Best of both cloud computing worlds: SaaS enables dependable cloud applications, while DaaS delivers the full client desktop and application experience. Users lose none of the features and functionality, and dedicated servers for cloud hosting are available as an add-on.
  • Application Integration: DaaS adds another layer to the flexibility by allowing users to integrate a large number of applications into a virtual desktop.
  • Customization and Flexibility: The users can customize the application according to their requirements and the flexibility to use the applications from any device anywhere is the top feature in cloud models.
  • Security and Control: DaaS permits users the choice of storing all application information, user data, etc. at their own data center, giving them full control.

 

Migrating your business to a DaaS or SaaS platform

Every service provider has its own set of processes for migrating existing businesses to a cloud platform. We can’t speak for everyone, but generally it’s a reasonably simple process to switch over to a cloud environment.

Contact Protected Harbor for a customized technology improvement plan (TIP). For smaller entities, this includes technologies like Protected Desktop, a DaaS service that delivers the best of Protected Harbor’s solutions, including 24×7 support, security, monitoring, backups, Application Outage Avoidance, and more. For larger entities, there is Protected Full Service, enabling remote cloud access and covering all IT costs. No two TIPs are the same, as they are designed specifically for each client’s business needs; we believe that technology should help clients, not force clients to change how they work.

What Functions Best? Virtualization vs. bare metal servers


 

What Performs Best? Bare Metal Server vs Virtualization

 

Virtualization technology has become a ubiquitous, end-to-end technology for data centers, edge computing installations, networks, storage and even endpoint desktop systems. However, admins and decision-makers should remember that each virtualization technique differs from the others. Bare-metal virtualization is clearly the preeminent technology for many IT goals, but host machine hypervisor technology works better for certain virtualization tasks.

By installing a hypervisor to abstract software from the underlying physical hardware, IT admins can increase the use of computing resources while supporting greater workload flexibility and resilience. Take a fresh look at the two classic virtualization approaches and examine the current state of both technologies.

 

What is bare-metal virtualization?

Bare-metal virtualization installs a Type 1 hypervisor — a software layer that handles virtualization tasks — directly onto the hardware, before the system installs any other OSes, drivers, or applications. Common hypervisors include VMware ESXi and Microsoft Hyper-V. Admins often refer to bare-metal hypervisors as the OSes of virtualization, though hypervisors aren’t operating systems in the traditional sense.

Once admins install a bare-metal hypervisor, that hypervisor can discover and virtualize the system’s available CPU, memory and other resources. The hypervisor creates a virtual image of the system’s resources, which it can then provision to create independent VMs. VMs are essentially individual groups of resources that run OSes and applications. The hypervisor manages the connection and translation between physical and virtual resources, so VMs and the software that they run only use virtualized resources.

Since virtualized resources and physical resources are inherently bound to each other, virtual resources are finite. This means the number of VMs a bare-metal hypervisor can create is contingent upon available resources. For example, if a server has 24 CPU cores and the hypervisor translates those physical CPU cores into 24 vCPUs, you can create any mix of VMs that use up to that total amount of vCPUs — e.g., 24 VMs with one vCPU each, 12 VMs with two vCPUs each and so on. Though a system could potentially share additional resources to create more VMs — a process known as oversubscription — this practice can lead to undesirable consequences.
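As a rough illustration of that budgeting, here is a minimal sketch of the vCPU arithmetic described above; the VM names and sizes are made up, and it assumes a simple 1:1 core-to-vCPU translation with no oversubscription.

```python
# Minimal sketch: provisioning VMs against a fixed vCPU budget (no oversubscription).
# All numbers and VM names are illustrative.
PHYSICAL_CORES = 24
vcpus_available = PHYSICAL_CORES  # hypervisor exposes cores 1:1 as vCPUs

requested_vms = [
    ("db-server", 4),     # (VM name, vCPUs requested)
    ("web-01", 2),
    ("web-02", 2),
    ("build-agent", 8),
]

provisioned = []
for name, vcpus in requested_vms:
    if vcpus <= vcpus_available:
        vcpus_available -= vcpus
        provisioned.append(name)
    else:
        # Granting this VM would exceed the physical resources (oversubscription).
        print(f"Cannot provision {name}: only {vcpus_available} vCPUs left")

print(f"Provisioned: {provisioned}; {vcpus_available} vCPUs remaining")
```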

Once the hypervisor creates a VM, it can configure the VM by installing an OS such as Windows Server 2019 and an application such as a database. Consequently, the critical characteristic of a bare-metal hypervisor and its VMs is that every VM remains completely isolated and independent of every other VM. This means that no VM within a system shares resources with or even has awareness of any other VM on that system.

Because a VM runs within a system’s memory, admins can save a fully configured and functional VM to a disk or physical servers, where they can then back up and reload the VM onto the same or other servers in the future, or duplicate it to invoke multiple instances of the same VM on other servers in a system.

 

 

Advantages and disadvantages of bare-metal virtualization

Virtualization is a mature and reliable technology; VMs provide powerful isolation and mobility. With bare-metal virtualization, every VM is logically isolated from every other VM, even when those VMs coexist on the same hardware. A single VM cannot directly share data with or disrupt the operation of other VMs, nor can it access the memory content or traffic of other VMs. In addition, a fault or failure in one VM does not disrupt the operation of other VMs. In fact, the only real way for one VM to interact with another VM is to exchange traffic through the network, as if each VM were its own separate server.

Bare-metal virtualization also supports live VM migration, which enables VMs to move from one virtualized system to another without halting VM operations. Live migration enables admins to easily balance server workloads or offload VMs from a server that requires maintenance, upgrades or replacements. Live migration also increases efficiency compared to manually reinstalling applications and copying data sets.

However, the hypervisor itself poses a potential single point of failure (SPOF) for a virtualized system. That said, virtualization technology is so mature and stable that modern hypervisors, such as VMware ESXi 7, rarely exhibit such flaws or attack vectors. If a VM fails, the cause probably lies in that VM’s OS or application rather than in the hypervisor.

 

What is hosted virtualization?

Hosted virtualization offers many of the same characteristics and behaviors as bare-metal virtualization. The difference comes from how the system installs the hypervisor. In a hosted environment, the host OS is installed first, and a suitable hypervisor — such as VMware Workstation, KVM or Oracle VirtualBox — is then installed atop that OS.

Once the system installs a hosted hypervisor, the hypervisor operates much like a bare-metal hypervisor. It discovers and virtualizes resources and then provisions those virtualized resources to create VMs. The hosted hypervisor and the host OS manage the connection between physical and virtual resources so that VMs — and the software that runs within them — only use those virtualized resources.

However, with hosted virtualization, the system can’t virtualize resources for the host OS or any applications installed on it, because those resources are already in use. This means that a hosted hypervisor can only create as many VMs as there are available resources, minus the physical resources the host OS requires.

The VMs the hypervisor creates can each receive guest operating systems and applications. In addition, every VM created under a hosted hypervisor is isolated from every other VM. Similar to bare-metal virtualization, VMs in a hosted system run in memory and the system can save or load them as disk files to protect, restore or duplicate the VM as desired.

Hosted hypervisors are most commonly used in endpoint systems, such as laptop and desktop PCs, to run two or more desktop environments, each with potentially different OSes. This can benefit business activities such as software development.

In spite of this, organizations use hosted virtualization less often because the presence of a host OS offers no benefits in terms of virtualization or VM performance. The host OS imposes an unnecessary layer of translation between the VMs and the underlying hardware. Inserting a common OS also poses a SPOF for the entire computer, meaning a fault in the host OS affects the hosted hypervisor and all of its VMs.

Although hosted hypervisors have fallen by the wayside for many enterprise tasks, the technology has found new life in container-based virtualization. Containers are a form of virtualization that relies on a container engine, such as Docker, LXC or Apache Mesos, as a hosted hypervisor. The container engine creates and manages virtual instances — the containers — that share the services of a common host OS such as Linux.

The crucial difference between hosted VMs and containers is that the system isolates VMs from each other, while containers directly share the same underlying OS kernel. This enables containers to consume fewer system resources compared to VMs. Additionally, containers can start up much faster and exist in far greater numbers than VMs, enabling greater dynamic scalability for workloads that rely on microservice-type software architectures, as well as important enterprise services such as network load balancers.

Virtualization vs cloud computing


 

Virtualization vs cloud computing

Cloud computing and virtualization are both technologies that were developed to maximize the use of computing resources while reducing the cost of those resources. They are also mentioned frequently when discussing high availability and redundancy. While it is not uncommon to hear people discuss them interchangeably, they are very different approaches to solving the problem of maximizing the use of available resources. They differ in many ways, and that leads to some important considerations when selecting between the two.

Virtualization: More Servers on the Same Hardware

It used to be that if you needed more computing power for an application, you had to purchase additional hardware. Redundancy systems were based on having duplicate hardware sitting in standby mode in case something should fail. The problem was that as CPUs grew more powerful and gained multiple cores, a lot of computing resources were going unused, which cost companies a great deal of money. Enter virtualization.

Simply stated, virtualization is a technique that allows you to run more than one server on the same hardware. Typically, one server is the host server and controls access to the physical server’s resources. One or more virtual servers then run within containers provided by the host server. The container is transparent to the virtual server, so the operating system does not need to be aware of the virtual environment. This allows servers to be consolidated, which reduces hardware costs. Fewer physical servers also mean less power, which further reduces cost.

Most virtualization systems allow virtual servers to be easily moved from one physical host to another. This makes it very simple for system administrators to reconfigure servers based on resource demand or to move a virtual server off a failing physical node. Virtualization helps reduce complexity by reducing the number of physical hosts, but it still involves purchasing servers and software and maintaining your infrastructure. Its greatest benefit is reducing the cost of that infrastructure for companies by maximizing the usage of the physical resources.

Cloud Computing: Measured Resources, Pay for What You Use

While virtualization may be used to provide cloud computing, cloud computing is quite different from virtualization. Cloud computing may look like virtualization because it appears that your application is running on a virtual server detached from any reliance or connection to a single physical host, and they are similar in that fashion. However, cloud computing is better described as a service, in which virtualization is one part of the physical infrastructure.

Cloud computing grew out of the concept of utility computing. Essentially, utility computing was the belief that computing resources and hardware would become a commodity, to the point that companies would purchase computing resources from a central pool and pay only for the CPU cycles, RAM, storage, and bandwidth they used. These resources would be metered to allow a pay-for-what-you-use model, much like buying electricity from the electric company; this is how it became known as utility computing. It is common for cloud computing to be distributed across many servers, which provides redundancy, high availability, and even geographic redundancy. This also makes cloud computing very flexible.
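To make the metering idea concrete, here is a minimal sketch of how a pay-for-what-you-use bill might be computed; the resource quantities and unit prices are illustrative assumptions, not any provider’s actual rates.

```python
# Minimal sketch of utility-style metered billing: pay only for what you use.
# All quantities and unit prices are assumed for illustration.
usage = {
    "cpu_hours": 1_200,     # vCPU-hours consumed this month
    "ram_gb_hours": 4_800,  # GB-hours of memory
    "storage_gb": 500,      # GB stored (monthly average)
    "bandwidth_gb": 300,    # GB transferred out
}

unit_prices_usd = {
    "cpu_hours": 0.04,
    "ram_gb_hours": 0.005,
    "storage_gb": 0.02,
    "bandwidth_gb": 0.09,
}

monthly_bill = sum(usage[k] * unit_prices_usd[k] for k in usage)
print(f"Metered bill this month: ${monthly_bill:,.2f}")
# 48 + 24 + 10 + 27 = $109.00 at these assumed rates
```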

It is easy to add resources to your application; you simply use them, just as you use electricity when you need it. Cloud computing has been designed with scalability in mind. The biggest drawback of cloud computing is that, of course, you do not control the servers. Your data is out there in the cloud, and you have to trust the provider to keep it safe. Many cloud computing services offer SLAs that promise to deliver a level of service and safety, but it is critical to read the fine print. A failure of the cloud service could result in a loss of your data.

A practical comparison: Virtualization vs. Cloud Computing

VIRTUALIZATION

Virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split one system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s ability to separate the machine’s resources from the hardware and distribute them appropriately.

CLOUD COMPUTING

Cloud computing is a set of principles and approaches to deliver compute, network, and storage infrastructure resources, services, platforms, and applications to users on-demand across any network. These infrastructure resources, services, and applications are sourced from clouds, which are pools of virtual resources orchestrated by management and automation software so they can be accessed by users on-demand through self-service portals supported by automatic scaling and dynamic resource allocation.

Information Technology IT Trends in 2021


 

What are the new IT Trends in 2021?

 

We’ve been so deep in this pandemic that some of us have forgotten what life was like before it. Remember we used to get together for lunch, go to a ball game, celebrate holidays together, and not wear masks! 2021 will begin with more of the same of 2020 but will shift towards “normal”.

When is anyone’s guess. But what will happen with technology? It can be argued that without technology, the economy and education would have taken an even bigger hit than they did in 2020. Platforms like Zoom, Microsoft Teams, and Google Hangouts allowed us to work in a virtual world. Companies like Protected Harbor’s clients, who were smart enough to set up a virtual desktop, made the move to “work from home” seamlessly.
So when work moves back to normal, what will technology look like? What trends will continue from 2020? What new trends will emerge?

Trend 1: Drug development revolution with advanced Covid-19 testing and vaccine development

 

Operation Warp Speed changed the way that drugs are developed, tested, and trialed. Assuming the Pfizer and Moderna vaccines prove to be safe (and we feel strongly they will), the speed with which vaccines are brought to market will increase dramatically. Both Pfizer and Moderna developed mRNA vaccines, the first in human history! We expect more innovations throughout 2021.

Also, COVID self-test kits are being developed all over the world. We expect this trend to continue and perhaps extend to self-test kits for other diseases.

Trend 2: Continued expansion of remote working and video conferencing

This area was already gaining lots of traction going into 2020 and grew exponentially during the pandemic.

This area has seen rapid growth during the pandemic, and it will likely continue growing in 2021.  Many of our clients have realized they are just as productive with a remote work force as they were before.  Some of them have permanently moved to a “work from home” environment.

Zoom, which grew from a startup in 2011 to going public in 2019, became a household name during the pandemic. Other existing large corporate tools such as Cisco’s Webex, Microsoft’s Teams, Google Hangouts, GoToMeeting, and Verizon’s BlueJeans are also providing state-of-the-art videoconferencing systems, facilitating remote work across the globe.

Many new ventures are emerging in the remote working sector. Startups Bluescape, Eloops, Figma, Slab, and Tandem have all provided visual collaboration platforms enabling teams to create and share content, interact, track projects, train employees, run virtual team-building activities, and more.

These tools also help distributed teams keep track of shared learning and documentation. Users can create a virtual office that replicates working together in person by letting colleagues communicate and collaborate with one another easily.


Trend 3: Contactless delivery and shipping remain as the new normal

Due to the pandemic, the US has seen a 20% increase in customers who prefer contactless delivery. Companies that have led in this space are DoorDash, Postmates, Instacart, Grubhub, and Uber Eats. These companies will continue to flourish in 2021. Trend #10 (autonomous driving) may be combined with contactless delivery to offer a truly futuristic way of delivering goods and food.


Trend 4: Telehealth and telemedicine flourish

Telehealth visits have surged by 50 percent compared with pre-pandemic levels. IHS Technology predicted that 70 million Americans would use telehealth by 2020. Since then, Forrester Research predicted the number of U.S. virtual care visits will reach almost a billion early in 2021.

Teladoc Health, Amwell, Livongo Health, One Medical, and Humana are some of the public companies offering telehealth services to meet their current needs.

Startups are not far behind. Startups like MDLive, MeMD, iCliniq, K Health, 98point6, Sense.ly, and Eden Health have also contributed toward meeting the growing needs in 2020 and will continue offering creative solutions in 2021. Beyond telehealth, in 2021 we can expect to see health care advancements in biotech and A.I., as well as machine learning opportunities (example: Suki AI) to support diagnosis, admin work, and robotic health care.

In many ways, patients prefer Telehealth and virtual doctor’s appointments. There’s no more waiting forever in the waiting room, and the doctor simply video calls you when he’s ready.
As Telehealth grows in 2021, tech companies will need to ensure they are HIPAA compliant, and that videos are kept private, and free from hackers.


Trend 5: Online education and e-learning as part of the educational system

Covid-19 fast-tracked the e-learning and online education industry. During this pandemic, 190 countries have enforced nationwide school closures at some point, affecting almost 1.6 billion people globally.

There is a major opportunity with schools, colleges, and even coaching centers conducting classes via videoconferencing. Many institutions have actually been recommended to pursue a portion of their curriculum online even after everything returns to normal.

The challenge in 2020 was the availability of high-speed internet, especially in low-income neighborhoods.  As the economy recovers in 2021, we expect more and more households will have this access.

Over time, we expect internet access to be considered just as critical as food, water and electricity.


Trend 6: Increased development of 5G infrastructure, new applications, and utilities

There is no doubt that demand for higher-speed internet and a shift toward well-connected homes, smart cities, and autonomous mobility have pushed the advancement of 5G-6G internet technology. In 2021, we will see new infrastructure and utility or application development updates both from the large corporations and startups.

Many telcos are on track to deliver 5G, with Australia having rolled it out before Covid-19. Verizon announced a huge expansion of its 5G network in October 2020, which will reach more than 200 million people. In China, 5G deployment has been happening rapidly. There are more than 380 operators currently investing in 5G. More than 35 countries have already launched commercial 5G services.

Startups like Movandi are working to help 5G transfer data at greater distances; startups including Novalume help municipalities manage their public lighting network and smart-city data through sensors. Nido Robotics is using drones to explore the seafloor.

Through 5G networks, these drones help navigate better and use IoT to help communicate with devices on board. Startups like Seadronix from South Korea use 5G to help power autonomous ships. The 5G networks enable devices to work together in real-time and help enable vessels to travel unmanned.

The development of 5G and 6G technology will drive smart-city projects globally and will support the autonomous mobility sector in 2021.

Trend 7: A.I., robotics, internet of things, and industrial automation grow rapidly

In 2021, we expect to see huge demand and rapid growth of artificial intelligence (A.I.) and industrial automation technology. As manufacturing and supply chains are returning to full operation, manpower shortages will become a serious issue. Automation, with the help of A.I., robotics, and the internet of things, will be a key alternative solution to operate manufacturing.

Some of the top technology-providing companies enabling industry automation with A.I. and robotics integration include:

UBTech Robotics (China), CloudMinds (U.S.), Bright Machines (U.S.), Roobo (China), Vicarious (U.S.), Preferred Networks (Japan), Fetch Robotics (U.S.), Covariant (U.S.), Locus Robotics (U.S.), Built Robotics (U.S.), Kindred Systems (Canada), and XYZ Robotics (China).

Also, as we discuss in Trend # 10 (autonomous driving), AI has played, and will continue to play, a key role in autonomous driving, as cars “learn” how humans react to certain road conditions.

Trend 8: Virtual reality (VR) and augmented reality (AR) technologies usage rises

Augmented reality and virtual reality have grown significantly in 2020. These immersive technologies are now part of everyday life, from entertainment to business. The arrival of Covid-19 has prompted this technology adoption as businesses turned to the remote work model, with communication and collaboration extending over to AR and VR.

The immersive technologies from AR and VR innovations enable an incredible source of transformation across all sectors. AR avatars, AR indoor navigation, remote assistance, integration of A.I. with AR and VR, mobility AR, AR cloud, virtual sports events, eye tracking, and facial expression recognition will see major traction in 2021. Adoption of AR and VR will accelerate with the growth of the 5G network and expanding internet bandwidth.

Companies like Microsoft, Consagous, Quytech, RealWorld One, Chetu, Gramercy Tech, Scanta, IndiaNIC, Groove Jones, etc. will play a significant role in shaping our world in the near future, not only because of AR’s and VR’s various applications but also as the flag carrier of all virtualized technologies.

Trend 9: Continued growth in micromobility

While the micro-mobility market saw a natural slowdown at the beginning of the Covid-19 spread, the sector has already recovered to its pre-Covid growth level. E-bike and e-scooter usage is soaring, since they are viewed as convenient transportation alternatives that also meet social distancing norms. Compared to the pre-Covid days, the micro-mobility market is expected to grow by 9 percent for private micro-mobility and by 12 percent for shared micro-mobility.

Hundreds of miles of new bike lanes have been created in anticipation. Milan, Brussels, Seattle, Montreal, New York, and San Francisco have each introduced 20-plus miles of dedicated cycle paths. The U.K. government announced that diesel and petrol-fueled car sales will be banned after 2030, which has also driven interest in micro-mobility as an alternative option.

Startups are leading the innovation in micro-mobility. Bird, Lime, Dott, Skip, Tier, and Voi are key startups leading the global micro-mobility industry.

China has already seen several micro-mobility startups reach unicorn status, including Ofo, Mobike, and Hellobike.

 

Trend 10: Ongoing autonomous driving innovation

We will see major progress in autonomous driving technology during 2021.  Tesla has clearly led the way.  Tesla’s Autopilot not only offers lane centering and automatic lane changes, but, from this year, can also recognize speed signs and detect green lights.

Honda recently announced that it will mass-produce autonomous vehicles, which under certain conditions will not require any driver intervention. Ford is also joining the race, anticipating the launch of an autonomous ridesharing service in 2021; the company could also make such vehicles available to certain buyers as early as 2026. Other automakers, including Mercedes-Benz, are also trying to integrate some degree of autonomous driving technology into their new models from 2021. GM intends to roll out its hands-free-driving Super Cruise feature to 22 vehicles by 2023.

The fierce market competition is also accelerating self-driving technology growth in other companies, including Uber, Lyft and Waymo. Billions of dollars have been spent in acquiring startups in this domain: GM acquired Cruise for $1 billion; Uber acquired Otto for $680 million; Ford acquired Argo AI for $1 billion; and Intel acquired Mobileye for $15.3 billion.

Looking ahead

Technology development in 2021 will be somewhat of a continuation of 2020, but the influence of Covid-19 will evolve during the year. Many of our new behaviors will become part of the new normal in 2021, helping drive major technological and business innovations.

Protected Harbor continues to monitor these new technologies and looks to bring them to clients if and when there is a business need. For more information on Protected Harbor, please visit www.protectedharbor.com.

Protected Harbor’s New PBX Phone System


 

Protected Harbor is proud to introduce a state-of-the-art phone system, 3CX. It is available to all our current clients and to anyone in the market looking to upgrade their current setup. Protected Harbor has partnered with 3CX and provides system configuration and support.

The benefits of this new phone system are plentiful, starting with the end user being in complete control. The PBX can be installed on-premises and virtualized on a Linux or Windows platform. It is easily set up for remote work via iOS and Android apps using a QR code. Staff can be added, and voicemail set up, in minutes. User management is simple and easy, which will save countless hours of work.

Perhaps the biggest benefit of our system is that it includes a softphone, so you can make and receive business calls from your PC, tablet, or smartphone. There is no need to be tied to a physical phone in your office. It’s perfect for businesses that have employees working from home during COVID-19.

Moreover, many of the phone features that other vendors classify as add-ons, such as video conferencing, iOS and Android mobile apps or call center features, are included with Protected Harbor’s PBX. So, there are no hidden costs for the features you need.

Switching to this configuration makes complete sense when you compare pricing to that of other PBX vendors such as 8×8 and Avaya. Our system provides a complete unified communications solution that is easy to manage, flexible and affordable. Whether you are a small business or a large enterprise, you can save thousands and avoid all the hassle of purchasing additional extensions and add-ons as your business grows.

Think less about maintaining your PBX, and more about your business – call or email us today to find out more about Protected Harbor’s new phone system! www.protectedharbor.com

WHY IS 99.99% UPTIME IMPORTANT?


 

Today, businesses of all sizes have grown more reliant on their technology, and no business, no matter the size, wants to see its systems or site offerings go offline, even for a few minutes. This is why uptime has become vital. For many companies, uptime is not a preference; it’s a necessity.

Uptime is important because the cost and consequences of downtime can cripple a business; however, no business in any industry can guarantee absolute perfection. Even with tremendous precautions and redundancies in place, systems can fail. Natural disasters or other factors outside of our control, which may require a quick reboot, can’t always be predicted or prevented.

In order to evade debilitating periods of downtime, businesses must employ the most current technologies, designed with uptime in mind, or utilize a managed service provider well versed in the latest technology and long term solutions.

It is no secret that businesses look for 99.99% uptime. If that percentage seems excessive, consider that the additional decimals make a huge difference; the reality is that 0.1% downtime is an unacceptable figure for most companies. When businesses encounter downtime, they cannot provide services to their customers. Customers have short memories and, as a result, may be tempted to take their dollars elsewhere if they cannot get what they want in a timely manner.
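To see how much those extra decimals matter, here is a minimal sketch of the downtime each uptime level allows over a year; it is simple arithmetic on a 365-day year.

```python
# Minimal sketch: downtime allowed per year at different uptime percentages.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for uptime in (99.0, 99.9, 99.99, 99.999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - uptime / 100)
    print(f"{uptime}% uptime -> ~{downtime_minutes:,.0f} minutes of downtime per year")

# 99.9%  allows ~526 minutes (almost 9 hours) per year
# 99.99% allows ~53 minutes (under an hour) per year
```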

Not only is losing customers disastrous, but productivity can suffer as well.  This is never a good combination.  The average cost of downtime across businesses of all sizes and all industries is around $5,600 per minute.

When a customer selects a company, they need to trust that they are working with a professional and capable organization. Not being able to access a company’s website, or being told by employees that they cannot help at the moment of the call, does not inspire shopper confidence. The damage to a business’s reputation can be irreversible.

Given that the consequences of downtime are so costly, it’s easy to understand why achieving near-perfect uptime is so important. In order to completely avoid all of the costs and consequences associated with downtime, businesses need to be aiming for uptime of at least 99.99%. While these consequences may seem a bit disheartening, the good news is that there are ways to avoid them. Get connected to our data center and solve your issue.

Protected Harbor helps businesses across the US address their current IT needs by building custom, cost-effective solutions. Our unique technology experts evaluate current systems, determine the needs, and then design cost-effective solutions. On average, we are able to save clients up to 30% on IT costs while increasing their security, productivity, and durability. We work with many internal IT departments, freeing them up to concentrate on daily workloads while we do the heavy lifting. www.protectedharbor.com

Keep Your Business Running – Prepare for The Worst


Since the face of how we do business has changed because of COVID-19, businesses should think about (and hopefully prepare for) cyberattacks and security breaches. Having a disaster recovery plan in place to restore critical information is a good place to start.  However, in these times this is simply not enough.

This is why it’s important to have a 360-degree business continuity plan ready.

Here are some devastating facts from the Bureau of Labor, PC Mag, Gartner, Unitrends, and TechRadar:

  • Every year, 20% of businesses experience system loss from events such as fires, floods, outages, etc. These types of occurrences not only result in loss of data, but they displace employees and shatter operations
  • 60% of companies that lose their data will shut down within 6 months
  • Only 35% of Small Businesses have a comprehensive disaster recovery plan in place, according to Gartner
  • The cost of losing critical applications has been estimated by experts at more than $5,000 per minute
  • Network downtime costs 80% of small and medium businesses at least $20,000 per hour

If these facts are not enough to ensure a business continuity plan is in place, then you are rolling the dice in a game you will not win. It is not a matter of IF something will happen; it’s WHEN.

A business continuity plan creates a means of keeping your business operational during a crisis. In addition, the plan should include protocols for your devices, communication channels, office setup – including employees, and more.

If you’re currently experiencing unexplained system slowdowns or outages and struggling to maintain normal computer functions, then your system needs attention, and you probably don’t have a continuity plan in place. It’s not too late to start; understand it’s going to take some time and effort, but the end result will be invaluable.

This is where Protected Harbor can help. We deliver end-to-end IT solutions ranging from custom-designed systems, data center management, disaster recovery, and ransomware protection to cloud services and more. On average, we save clients up to 30% on IT costs while increasing their productivity, durability, and sustainability. Let our unique technology experts evaluate your current systems and design cost-effective, secure options.

With us, you can be sure your systems will run during a crisis.  Contact us today to find out more.  www.protectedharbor.com

Disadvantages of AWS, Azure, and Other Big Brand Hosting


When it comes to hosting for a business, you don’t want to use just anyone. There are many critical factors to consider including security, stability, uptime, scalability, and more. Because of this, many businesses gravitate towards big, established brands for hosting and management such as Amazon’s AWS or Microsoft’s Azure.

Companies like these can likely provide well beyond your technical needs. That’s not to say they’re all exactly the same. Azure caters to Microsoft products, allowing large companies to move their Windows-based infrastructure online more easily. Meanwhile, AWS boasts of its general flexibility and universal capabilities.

Each brand has its unique strengths. When it comes to weaknesses, however, there are some overlapping issues that basically any large-scale hosting company deals with.

Overwhelming Options

Right from the start, many businesses are overwhelmed with the variety of packages and services offered by large hosting companies. AWS, for example, greets you with an entire library of services and products to choose from. Simply trying to find basic website hosting proves to be difficult and confusing.

Unclear Pricing Structures

Equally confusing are the pricing structures. Many companies try to sign you up on free trials or temporary discount pricing, only for you to discover the true inflated price months down the road.

These companies also tend to work off a pay-per-use model. In other words, the more data you process and store, the more your hosting costs. While it sounds nice in theory, as you only pay for what you use, it can make it very difficult to predict your monthly costs as prices fluctuate.

It also leaves you severely exposed to DDoS attacks.

DDoS attacks infect a large number of devices with malware and then use them to unleash a coordinated flood of traffic on an unsuspecting network. In addition to slowing down and likely crashing your systems, this results in a massive spike in data use.

The average size of a DDoS attack is 2.5 gigabits per second. If you’re being charged per data used, you’ll be left with a very large hosting bill following a DDoS attack.
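To see why that matters under pay-per-use pricing, here is a minimal sketch of the data bill a sustained flood could generate; the attack size comes from the figure above, while the duration and per-GB price are illustrative assumptions, not any provider’s real rates.

```python
# Minimal sketch: estimating the metered-data cost of a sustained DDoS flood.
# Attack size (2.5 Gbps) is from the article; duration and price are assumptions.
attack_gbps = 2.5        # gigabits per second
duration_hours = 6       # assumed attack duration
price_per_gb_usd = 0.09  # assumed per-GB data charge

# Convert gigabits/s to gigabytes/s, then scale by the attack duration in seconds.
gigabytes = attack_gbps / 8 * 3600 * duration_hours
extra_cost = gigabytes * price_per_gb_usd

print(f"~{gigabytes:,.0f} GB of attack traffic -> roughly ${extra_cost:,.0f} in extra charges")
# ~6,750 GB -> roughly $608 at these assumed figures
```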

Advanced Knowledge

Once you’ve figured out what your business needs, the real difficulty begins. Within any given service there are countless add-ons, tools, settings, and more. While this provides a lot of flexibility and customization, it requires a lot of work and understanding. The deeper your needs go, the deeper your understanding needs to be.

Each platform is different, which means you either need to hire someone who is experienced on a particular platform, or you’ll need to invest in training a current employee. The question is, do you want someone learning a new platform as they’re managing your IT needs?

Support Problems

Platforms like AWS and Azure do offer their own technical support, should you require it. In fact, they often provide a certain amount of free support when you sign up. However, those hours can quickly be eaten up during the onboarding. After that, you’ll pay for support.

Things can get very expensive very quickly.

A better solution is generally to find a third party to help manage and maintain your hosting needs. This can provide more affordable support, but it also adds complexity to your hosting management and expenses.

A Simpler, Yet Powerful Hosting Solution

At Expedient Technology Services, we provide straightforward, yet diverse hosting solutions for businesses of all sizes. Whether you’re a start-up or an enterprise, we have the capabilities to meet your specific needs.

We operate under flat rate costs, so you know exactly what you’ll pay every month. We can even bundle in support hours so that you get professional assistance when you need it. As your company grows, we can easily scale our services with you.

While our initial costs may seem higher, they’re generally cheaper in the long run. Best of all, they’re much less stressful to understand and manage. After all, ETS exists to provide Stress-Free IT. For hosting, computer services, and technical support in Dayton, Ohio, and beyond, contact ETS today.

What Does the Average Company Pay for Downtime?


 

What Does Downtime Cost the Average Business?

 

One bad experience is all it takes to rattle a business owner. Infrastructure matters, and when your systems or applications crash, it can have an enormous impact on your bottom line, not to mention your business operations. Monetary and data losses from unexpected crashes can even, in some cases, cause a company to close its doors permanently.

According to an ITIC study this year, the average cost of a single hour of downtime is $100,000 or more. Since 2008, ITIC has sent out independent surveys that measure downtime costs; its findings show that the cost of a single hour of downtime has risen by 25% to 30%. Here are some staggering results:

  • 98% of organizations say a single hour of downtime costs over $100,000
  • 81% of respondents indicated that 60 minutes of downtime costs their businesses over $300,000
  • 33% of those enterprises reported that one hour of downtime costs their companies between $1 million and $5 million

The only way to mitigate risk is to be proactive: have the right technology in place to monitor and prevent, and when an attack happens (and it’s not a matter of IF but WHEN), have the right company on hand to restore, rebuild, and restart. Once you understand the real-life costs of downtime, it should not be hard to take proactive measures to protect your business.

Protected Harbor has a full team of technical experts and resources to maintain your system’s well-being and ensure business continuity. Contact us today for a full assessment of your applications and infrastructure.  www.protectedharbor.com