“If you think you know-it-all about cybersecurity, this discipline was probably ill-explained to you.”
Stephane Nappo
In my old blog, I used to take some time to write about the latest breaches, exploits, and vulnerabilities seen out in the wild. It wasn’t because I wanted to be another voice talking about all the security issues being found; it was so I could stay up to date and educated on the latest happenings in the cybersecurity world (do we still call it cyber security?). I’ve spent a lot of time on AI lately, but I want to get back to what I know best, and that is security. So here is some of the latest news in security today:
MOVEit
Progress Software’s MOVEit Transfer application has been found to have multiple security vulnerabilities. Personally, I had never heard of this application, but a lot of government agencies and Fortune 500 companies use it to transfer files securely, both internally and externally. Unfortunately, in May it was found to have a SQL injection flaw (CVE-2023-34362) that, when abused, can allow an attacker to upload files, download files, and take control of the affected system. It was exploited as a zero-day, meaning it was being abused in the wild before any patch or mitigation existed. Compounding the issue, two more SQL injection vulnerabilities were disclosed in June that could allow an attacker to steal data from the affected system. Horizon3.ai provided a simple POC here if you want to play around with it. It’s a great POC, as it’s written in fully commented Python that walks through how the attack works. What makes this attack particularly bad is the application’s widespread use and the fact that data stolen through it is considered sensitive; it is a “secure” file transfer application, after all. It’s a bad look for Progress Software, since the product is marketed as “Secure File Transfer and Automation Software for the Enterprise.” With three SQL injection vulnerabilities found, and more continuing to surface, it makes me wonder whether any penetration testing was done on their own software; these vulnerabilities aren’t overly difficult to execute. Already, local governments in the United States are warning of data breaches from this attack. I wish all the best to the security team over there, and I hope it doesn’t get worse.
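To make the class of bug concrete, here is a minimal, self-contained Python sketch of a SQL injection and its fix. This is purely illustrative and has nothing to do with MOVEit’s actual code; the table and inputs are invented for the demo.

```python
import sqlite3

# Toy database standing in for any SQL backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (owner TEXT, name TEXT)")
conn.executemany("INSERT INTO files VALUES (?, ?)",
                 [("alice", "report.pdf"), ("bob", "payroll.xlsx")])

def list_files_vulnerable(owner: str):
    # BAD: attacker input is pasted straight into the query string.
    query = f"SELECT name FROM files WHERE owner = '{owner}'"
    return conn.execute(query).fetchall()

def list_files_safe(owner: str):
    # GOOD: a parameterized query treats the input as data, never as SQL.
    return conn.execute("SELECT name FROM files WHERE owner = ?", (owner,)).fetchall()

payload = "x' OR '1'='1"               # classic injection payload
print(list_files_vulnerable(payload))  # leaks every row in the table
print(list_files_safe(payload))        # returns nothing, as it should
```

The fix has been known for decades, which is why finding three of these in one product raises eyebrows.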
Barracuda Email Security Appliance
This gem of a CVE I have personal experience with. While I have not used Barracuda’s Email Security Gateway (ESG) appliances myself, the parent company I worked for did. Last fall, the company I worked for started seeing an absolute deluge of email traffic from Barracuda ESG appliances. To us, it amounted to a DDoS attack against our website on top of a large increase in phishing against our employees. We thought the root cause was a bounce or reflection of our SendGrid marketing emails back to us, and that the phishing increase was a separate issue. In the end, we blocked that traffic with the help of our bot mitigation company and went on with our lives. It turns out that the traffic we were seeing came from compromised ESG appliances. Now, I am not going to do a major write-up of how this attack worked, as Mandiant already has a phenomenal write-up here. What made this attack particularly bad was something we deal with a lot in information security: persistence.
Once the attacker saw that Barracuda was trying to fix the flaw, they kicked into overdrive. Their first attempt at persistence was setting up cron jobs that spawned a reverse shell and ran hourly. Later attempts modified the Perl update script built into the appliance to execute code. Finally, to top it all off, they deployed a kernel rootkit that would run at boot time. The persistence is so deep that both Barracuda and Mandiant recommend customers replace the entire appliance (oof)! The attacker is most likely from China, as Mandiant discovered that during exfiltration the attacker was mainly looking for specific emails from East Asian academics and government officials. It’s not every day that a system is so thoroughly infected that the whole thing needs to be replaced. I wish the security team over at Barracuda all the best.
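Defenders hunting for this kind of cron-based persistence often start by grepping scheduled tasks for reverse-shell patterns. Here is a rough Python sketch of that idea; the paths are standard Linux cron locations, but the indicator list is my own invention, not Mandiant’s published IOCs.

```python
import re
from pathlib import Path

# Hypothetical indicators of a reverse shell; a real hunt would use the
# vendor's published IOCs instead of this hand-rolled list.
SUSPICIOUS = [r"bash -i", r"/dev/tcp/", r"nc\s+-e", r"curl .*\|\s*(ba)?sh"]

def audit_cron(roots=("/etc/crontab", "/etc/cron.d", "/var/spool/cron")):
    """Flag cron entries that look like reverse shells."""
    for root in map(Path, roots):
        if not root.exists():
            continue
        files = [root] if root.is_file() else [p for p in root.rglob("*") if p.is_file()]
        for f in files:
            for lineno, line in enumerate(f.read_text(errors="ignore").splitlines(), 1):
                if any(re.search(pat, line) for pat in SUSPICIOUS):
                    print(f"{f}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    audit_cron()
```

Of course, once an attacker has moved into the update pipeline or a kernel rootkit, userland checks like this can no longer be trusted, which is exactly why the replace-the-hardware advice exists.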
Microsoft Office DDoS Outage
While DDoSing a site isn’t hacking or exploiting a vulnerability, it is annoying. What makes this attack interesting is that it happened to such a large company, and more specifically, to a product that has defenses in place to mitigate exactly this kind of attack. This report hit late last night and can be read here. Basically, a group claiming to be from Sudan (not verified) launched a Layer 7 (application layer) DDoS attack against Microsoft’s cloud services. Microsoft didn’t provide much data on how much traffic hit them, but they did say it involved several different methods of overloading their cloud resources. They did say, however, that the attack used “rented cloud resources, botnets, proxies, and VPNs.” So whoever this attacker is, they are coordinated enough to deploy that many resources against a complex target like Azure. Microsoft was obviously able to mitigate the attack and make some changes to its firewalls in case of future attacks. I do hope Microsoft releases more information in the future. It was probably a rough day for the network and security engineers over at Microsoft. I hope they get some rest!
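For context on why application-layer floods are so nasty: the classic first defense is per-client rate limiting, which a botnet of rented cloud resources and proxies sidesteps because every individual source stays under the limit. Below is a minimal token-bucket sketch in Python; this is my own illustration of the concept, not Microsoft’s mitigation.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client request limiter: `rate` tokens refill per second, up to `burst`."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.state = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, client: str) -> bool:
        tokens, last = self.state[client]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        allowed = tokens >= 1
        self.state[client] = (tokens - 1 if allowed else tokens, now)
        return allowed

limiter = TokenBucket(rate=5, burst=10)
# One aggressive client gets cut off after its burst...
print(sum(limiter.allow("198.51.100.7") for _ in range(50)))  # ~10 allowed
# ...but 50 botnet nodes sending one request each all sail through.
print(sum(limiter.allow(f"10.0.0.{i}") for i in range(50)))   # 50 allowed
```

That evasion is why real mitigations layer reputation scoring, challenge pages, and caching on top of simple limits.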
Conclusion
There have been multiple vulnerabilities and disclosures this month, but I wanted to focus on just the big ones. I’ll continue writing these once or twice a month, depending on the security landscape and my time. I know in previous posts I went more in-depth on how some of these attacks worked. In future posts I will dive deeper; I just need to get used to writing again. Until next time, and stay safe out there!
“The future of AI is bright, and it will continue to revolutionize the way we live and work. With advancements in machine learning and natural language processing, AI will become even more powerful and ubiquitous in the coming years.”
-GPT4All
As much as I harp on the current hype surrounding AI and its pace of advancement, I do believe there is a place and use for these new tools. While we aren’t going to be replaced overnight, I do expect a big productivity boom from them. One of the most significant drawbacks to the current crop of Large Language Model AIs (LLM AIs) is that they are controlled by someone else. OpenAI has its popular ChatGPT, Facebook (Meta) has its Meta AI, Google has Bard, and Microsoft has Bing AI. All of these companies have to make money, so whether through a subscription or selling your data, there is a cost. They are also black boxes in terms of how most of them work. If we want to live in a world where everyone is on equal footing with AI, people need to be able to run these models locally. In this post, I will share how you can run your own ChatGPT at home.
When researching this article, I was surprised to find out how many choices there are if you want to run your own LLM AI from home. One of the easiest and quickest ways to get up and running comes from the folks over at GPT4All. They offer a one-click installer to download a ChatGPT-like AI onto your Windows, Linux, or Mac computer. Once it’s downloaded, pick the language model you want to work with, and presto! I had zero issues setting this AI up and was asking it questions in minutes. Despite its simplistic interface, its settings menu has a lot of knobs you can use to tweak your output. It also has one of the largest selections of language models to choose from. While it doesn’t require a GPU, it can tax your CPU, and responses are a little slower depending on what you are running it on. Overall, it is the easiest experience to set up and doesn’t require any real technical knowledge. The community support is great if you run into any issues as well.
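If you outgrow the desktop app, GPT4All also ships Python bindings. A minimal sketch is below; the model filename is just an example from their download list, and the exact API has shifted between releases, so check their docs.

```python
# pip install gpt4all
from gpt4all import GPT4All

# Any model from the GPT4All download list works; this filename is an example.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# Runs entirely on the local CPU; no API key, no data leaving the machine.
print(model.generate("Explain, in two sentences, why local LLMs matter.",
                     max_tokens=128))
```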
Based on the fantastic work of Stanford’s Alpaca project
Fast and customizable
Huge community support
Does require a bit of technical knowledge
Let’s say you are like me and want to get down in the dirt and really PLAY with an LLM AI. However, you don’t want to make a career out of it, and you still get to go home at the end of the day. My suggestion to you would be Alpaca-LoRA. Alpaca low-rank adaptation, or Alpaca-LoRA, is an LLM AI forked from Stanford University’s Alpaca project. Stanford set out to make an LLM AI that fixed some of the deficiencies of ChatGPT, like generating false information and toxic language. Stanford released its assets to the open-source community, which then created Alpaca-LoRA. Once you have cloned the repo and installed the Python requirements, it’s pretty straightforward to get up and running. I found it to be more descriptive and better able to handle programming challenges than GPT4All. The downside is that everything is handled via the Python console instead of a nice interface like GPT4All’s. Not ideal, but this is where the amazing community support comes in. The GitHub repo has a resources section for all the projects Alpaca-LoRA has spawned. If you want a ChatGPT-style interface, someone has created that. Maybe you need Alpaca-LoRA in Spanish? Someone has done that too. The open-source community has embraced Alpaca-LoRA to the point that a leaked memo from Google states that they are falling behind the open-source community. This is the model I ended up going with at home. If you don’t mind getting your hands dirty, this is the one to pick. It’s not as easy to set up as GPT4All, but it has many more features.
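Under the hood, Alpaca-LoRA’s generation script boils down to loading the base LLaMA weights and stacking the small LoRA adapter on top with the PEFT library. A rough sketch of that pattern is below; the model IDs follow the repo’s README at the time of writing and may have changed, and the 7B weights need a serious amount of RAM.

```python
# pip install torch transformers peft sentencepiece
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "decapoda-research/llama-7b-hf"  # base weights named in the repo's README
ADAPTER = "tloen/alpaca-lora-7b"        # the LoRA adapter itself

tokenizer = LlamaTokenizer.from_pretrained(BASE)
model = LlamaForCausalLM.from_pretrained(BASE)
model = PeftModel.from_pretrained(model, ADAPTER)  # stack the adapter on top

prompt = "### Instruction:\nWrite a haiku about open-source AI.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(input_ids=inputs["input_ids"], max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The appeal of LoRA is that the adapter is tiny (megabytes, not gigabytes), which is a big part of why the community could iterate on it so quickly.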
Not an LLM but rather a large site hosting thousands of models
Models large and small are available
Try-before-you-download feature
It can be a bit overwhelming
For this last one, I had difficulty narrowing it down to a specific model or program. Instead of just picking one, I’ll let you decide. Hugging Face is the place to go if you want to learn about or play with machine learning. Most of the LLM AIs you can play with today started on this site. You can find everything here, from conversational models to image-to-text, text-to-video, object detection, and more. The best part is that most projects allow you to play with them before downloading anything. If you are looking for the best LLM AIs, I suggest starting here. Hugging Face isn’t just for grabbing the latest and greatest; it’s also a great place to learn about machine learning and language models. I often picked up on what is happening in AI just by browsing the site. Browsing can be a bit overwhelming, but there is no better place to discover new AI models.
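Most models on the Hub can be pulled down with a few lines of the transformers library, which is a good way to get a feel for the site. A tiny example is below; gpt2 is just a small, fast model that downloads in seconds, and you can swap in almost any text-generation model from the Hub.

```python
# pip install transformers torch
from transformers import pipeline

# "text-generation" is one of many pipeline tasks; the model ID is any Hub repo.
generator = pipeline("text-generation", model="gpt2")
result = generator("The open-source AI community is", max_new_tokens=30)
print(result[0]["generated_text"])
```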
Conclusion
I hope you found this helpful; I learned a lot from researching this article. I honestly hope the open-source community continues pushing the boundaries of AI. I would much rather have a future where everyone can access these models than one where they are reserved for those who can afford them or locked away in some company’s data center. Until next time!
“The Linux philosophy is ‘Laugh in the face of danger’. Oops. Wrong One. ‘Do it yourself’. Yes, that’s it.”
Linus Torvalds – Creator of the Linux Kernel
Is Windows the Answer?
Microsoft Windows has been a part of my entire life. I grew up with it at home, at school, and later on at work. When I reached the end of high school, I had a life goal: to work for Microsoft. The only time I used Linux was when I needed to bypass the security controls on our home computer so that I could game when I was supposed to be doing my homework. However, after the release of Windows 8 and the interface changes Microsoft continues to push, I decided to change my daily driver to Linux. As you will find out, it wasn’t that simple.
Why?
Since Windows 8, Microsoft has been pushing updates to its interface and dumping anything that still looks like it was built for Windows 98. While some of these changes have been great, many have been terrible. The list is pretty long, but here are the top things that pushed me over the edge:
Windows 11 has decided to hide context menus. If you right-click a file, you must click “Show more options” to see everything you want. (Whoever thought this was a good idea… shame.)
Windows 10 and 11 are trying to do away with Metro UI from Windows 8. However, there are still Metro UI elements in Windows 11, on top of the new UI from Windows 10. Hell, there are still UI elements from Windows 98.
The endless push to get you to sign in with a Microsoft Account instead of a local account
Targeted ads and tracking telemetry
Ads in the start menu
The amount of bloat being shipped in standard Windows installs.
General lack of cohesion
Forcing Windows Server to use the same UI as consumer Windows.
I’ve stayed with Windows mostly because it’s still one of the most used operating systems in the world, and because of its gaming credentials. While I use Linux more and more at work, most of what I do at home is on Windows. I love to game on my PC, and for the longest time, Windows was the only way to do that. That changed recently with the release of the Valve Steam Deck. The Steam Deck runs Linux with a compatibility layer called Proton that lets you easily play Windows games on Linux. Proton isn’t new; it’s a supercharged version of the compatibility tool Wine. I’ve used Wine in the past, and while some things worked well, it was always a bit janky and didn’t always work. After getting my Steam Deck, I realized that times had changed, and maybe it was time to give Linux another shot.
Arch Linux
Setup
I have two computers at home: a gaming computer I built myself and a laptop I use for gaming and work. Since I know I will need at least one computer with Windows, I decided to trial-run Linux on my laptop. This was my first mistake, as some laptops are better suited to Linux than others; I’ll get to that in a minute. After deciding to use my laptop, it was time to pick a distro. In the past, I have usually stuck with Debian-based distros like Ubuntu or Mint, but I wanted to try something fresh. Linux distros usually come in two different flavors: point releases and rolling releases. Point releases, like Ubuntu or Fedora, ship big updates and drivers once or twice a year, with long-term support (LTS) versions maintained even longer. Point releases have been the gold standard since Linux was created, but in the last few years that has changed. Rolling releases are distros that ship a driver or update as soon as it is released. They are usually cutting-edge and have all the latest and greatest features. Arch Linux is one of those distros and has been growing in popularity over the last six years, to the point that it is one of the most popular distros around. I tried it a couple of times in 2015 and struggled with it. However, I wanted to try it again because most users who game on Linux swear by it. Instead of installing true Arch Linux, I decided to go with a distro called Manjaro. It’s a more user-friendly take on Arch Linux and has a lot of built-in scripts to get Steam up and running for gaming. I will be installing it on the following:
Asus G15 laptop (Nvidia RTX 3070 Ti, AMD Ryzen 9 5900HS, 32 GB RAM)
Logitech MX Master Mouse
Installation
Manjaro Linux running
Unlike the command-line installer that comes with Arch Linux, Manjaro comes with a simple-to-use interface to get everything set up. It was no different than setting up Ubuntu. After installing, I was greeted with a nice desktop interface. That was when the trouble began. While everything else worked, my Bluetooth mouse did not. I have a Logitech MX Master mouse, which I love. For whatever reason, it would not show up in the Bluetooth menu. Per the Arch documentation, it should just work, but it wouldn’t. Looking around on Reddit and the Manjaro forums, I found this thread about installing different Bluetooth managers. This is where things went off the rails. While testing some of them out, I broke the package manager and could not install any packages. By this point, I had spent about two hours trying to get my mouse working and was incredibly frustrated. I had seen a post earlier that said Manjaro wasn’t a true version of Arch Linux because of all the under-the-hood changes the project made. I decided to try Arch itself and see if I would have better luck.
He did not have better luck.
Narrator
Arch Linux comes with almost nothing. It’s a minimalist system that gives you just enough tools to get up and running; the rest is up to you. I installed a GUI, got the OS up to date, and got the display drivers running. Arch doesn’t ship with Bluetooth support, so you have to install the Bluetooth stack yourself. There are several packages you can pick from, but I went with the default utility package. This is where I ran into almost the same problem. The mouse would pair this time, but the cursor wouldn’t move. I spent another hour on this before I closed my laptop and just walked away. The next day, after doing some research, I found some very interesting things:
Some laptops are more Linux-compatible than others. Gaming laptops have a lot of custom firmware to control the fans, RGB lighting, and other system resources. This software is usually written for Windows, with no Linux support provided by the vendor.
Without knowing it, I had picked hard mode for getting Linux installed on my laptop. During my late-night search, I stumbled on the folks over at asus-linux.org. This team of developers has been working to get Asus laptops running well on Arch Linux and Fedora. Their guide specifically calls out not to install Manjaro on these laptops due to multiple compatibility issues. While they have a very straightforward guide for Arch Linux, the guide that caught my eye was the one for Fedora. Fedora has been around a long time, and while it may not be bleeding edge, it tries to be a middle ground between Arch and Ubuntu. I have used it before, and I am a lot more comfortable with it than with Arch.
Fedora Running Gnome
Installing Fedora 37 was very straightforward. I had zero issues getting everything up and running. While I have no love for the GNOME interface and its touch-centric design, unlike Windows, I can change it to whatever I want. Bluetooth worked without issues, my mouse paired, and all the hotkeys worked. The Fedora guide was straightforward, and getting the Nvidia drivers to work was a breeze. My only issue was that resuming from hibernation could take about a minute. This issue is tied to Sabrent NVMe drives, and the developers say it will be fixed. Before I get into my day-to-day driving of Fedora, I need to take a minute and call out Nvidia.
Nvidia
Unlike Intel and AMD, Nvidia does not open-source its drivers for Linux. They do provide a binary blob that you can run, but on almost all distros you need to make special changes under the hood to get it working without breaking your whole system. The open-source equivalent is a package called Nouveau. The developers of this package, with little to no support from Nvidia, have been hacking and patching together support on Linux. It works, but it’s never been great. If I had gotten a laptop with an Intel CPU/AMD GPU or an AMD CPU/AMD GPU, I would have had little to no trouble running Linux. While Nvidia has stated they will partially open-source their driver for Linux, progress has been very slow. If you plan on moving to Linux to game in the future, just be aware that Linux gets treated like crap compared to Windows. I hope that changes, and frankly, I am disappointed.
Trial Run
Broadly speaking, running Fedora on my laptop daily has been a breeze. I enjoy seeing daily updates to the kernel and being able to tweak performance at will. Steam and its Proton compatibility layer work amazingly well. Some games do better than others, but for the most part, I only had a few issues here and there. One of the only major problems is that most anti-cheat software doesn’t support Linux, so most online games don’t work. With older games, like Total War: Rome II, the game had trouble seeing the correct amount of VRAM on my GPU. None of these issues were game-breaking, and I could game without much trouble. Emulation also worked well, and playing my Nintendo Switch and DS games via emulation was a breeze. While the team over at asus-linux.org has done a great job of providing 1-to-1 tooling with Windows, it’s not perfect. The tool they use to control RGB lighting doesn’t always work, and despite being able to control the fans, the laptop ran a little hotter than it did on Windows. Overall, when gaming, I only lost 5 to 10 frames per second compared to Windows. In most games, that wasn’t very noticeable, but in more modern games where every frame matters, it could be annoying.
Enabling Proton
In terms of productivity, I didn’t have many issues either. I found tools to replace everything I used on Windows. Email was a little bit of a hassle. I use multiple Office 365 accounts spread over multiple domains. I have used the Thunderbird email client in the past, and while it’s usable, it’s not Outlook. I ended up having to pay for a third-party add-on to get Thunderbird to authenticate with Office 365. LibreOffice is a great 1-to-1 replacement for Microsoft Office. I spend most of my productivity tasks on the web, so using Firefox and Chrome is no different than on Windows. I did have an Nvidia driver issue where the laptop would come back from sleep but the display driver would not. There were lots of complaints about this online, and a simple crontab hack was able to fix it. In general, Fedora consumed far fewer resources at boot, and I didn’t have to worry about bloat or Fedora selling my data.

One issue I did have was with a tool called Remote.it. I use this to connect to my crypto mining warehouse in Montana. I unfortunately have to use this tool because the service provider, Starlink, uses Carrier-Grade NAT (CGNAT) for its service. CGNAT is used by providers who can’t get hold of a large enough pool of IPv4 addresses (there is a shortage). There is a great write-up here, but to keep it simple: if you use Starlink, you will be double-NATed and have no way to port forward. Remote.it is a service that allows you to tunnel around those limitations. Unfortunately, they don’t provide an installer for Fedora. My workaround was to install VirtualBox and run… Windows. It was annoying to have to install Windows for one application, but it also solved my email issues. My other issue was that my laptop has a 4K display. While I usually run it at 2K, Linux doesn’t have good support for HDR or fractional display scaling. There were a couple of workarounds to get scaling right, but Linux has a long way to go to support HDR (so does Windows, in that respect).
Notes For The Future
I installed Fedora back in December of 2022. Compared to how things were five years ago, I can already see a future where I no longer use Windows in my day-to-day life. Last month, I purchased a second 1 TB NVMe drive for my laptop, as it had a slot available. I ended up installing Windows on one drive and Fedora on the other. I spend most of my time in Linux and switch to Windows only when I need a Windows-native application or want to play a more modern, demanding game. If I could go back to December 2022 and give myself some tips, I would probably say the following:
Buy an Intel CPU/AMD GPU laptop or an AMD CPU/AMD GPU laptop. Dealing with Nvidia is a pain in the ass.
Rolling releases have tremendous support, but you are beta-testing the software.
Make sure any future laptop you use has basic Linux support. Many laptops these days have special hardware that only works on Windows.
Check to make sure every program you use day to day runs on Linux.
I am pleasantly surprised by how far Linux has come. It still requires the tweaking it’s so well known for, but if you stick with the mainstream Linux distros, it almost “just works.” Even if I didn’t have an Asus laptop and had installed Linux on my desktop instead, I think I would have ended up on Fedora. It is such a solid operating system (OS), and even Linus Torvalds, the creator of Linux, uses it as his day-to-day system. If you want to make the switch, I honestly can’t recommend a better OS. Last but not least, if I could make some recommendations to Microsoft, I would say the following:
You don’t have to be like Apple. Sure, they are riding high, but all great empires fall. Return to the Windows 7 interface and change everything to match that interface. Upgrade the internals to match Windows 11 (Direct Storage, DirectX Support, built-in Linux, etc.).
If you don’t want to settle on the Windows 7 interface, then stay set on the Windows 10 interface and clear out all the old design elements.
If you want to support handheld or touch devices, let the user choose what interface they want to use at installation. Trying to make an operating system that supports all devices is impossible. Gnome did the same thing with their UI, and it’s almost universally hated.
Focus a little more on gamers. I know they aren’t a big subset of your users, but you will lose them if Linux and Proton continue on their current path. Performance is everything.
I will continue to use Windows, and I am sure Windows 11 will get itself sorted out by Windows 12. In the meantime, I will keep using Fedora and enjoy the experience. For now, Windows is still installed, but if things continue, I will probably drop it entirely in the future.
Author’s note: After writing this post, I stumbled upon the AtlasOS project. The idea behind this project is to remove all the bloat from Windows. It was designed for older hardware, but it has already been shown to speed up gaming FPS on modern systems. All it requires is Windows 10. It does have a long list of drawbacks, but if you want to dual-boot a normal Windows OS and a gaming Windows OS, this is probably the way to do it.
The Internet is becoming the town square for the global village of tomorrow.
Bill Gates
Right now, you are on a website hosted in the Cloud. Specifically, this website is hosted on Amazon’s AWS platform. There is a high probability that you were just using an app on your phone hosted on Google Cloud or browsing a website running services from Microsoft Azure. Almost everything you do online is hosted in the “cloud.” Is that a good thing, and how did the Cloud come to consume the internet?
The Cloud
The word Cloud gets thrown around a lot and means different things to different people, but it comes down to this: the Cloud is someone else’s infrastructure that you are using. Before the Cloud, and even before modern data centers, if you wanted to put something on the internet, you had to purchase the hardware and run it yourself. If the application you wanted to run was business-critical, this required a lot of redundant hardware and was therefore expensive. Not only was it costly, it was also time-consuming to set up and manage. If you didn’t provision your hardware correctly and your company suddenly experienced a surge of users, there wasn’t much you could do until more hardware could be purchased and brought online. The answer to this, and the precursor to the Cloud, was co-location. Instead of running your own data center, you could take your hardware and run it in someone else’s. Co-location took the facility management out of running your own hardware. Companies no longer had to construct a building and hire employees to monitor it.
Now, if a company needed a server fixed or more capacity for its applications, it just filed a ticket with the hosting company, and the host got it done in an hour or two. In most cases, companies didn’t even need to purchase hardware, as they could lease whatever was required from the hosting company. It wasn’t perfect, as there was usually a lag between sending in a ticket and the problem getting fixed. There were also different levels of service a colo could provide: the more you paid, the faster the service you received. These service-level agreements and multi-tenant data centers popped up all over the world, and this structure worked from the 90s into the early 2000s.
Marketing and NASA
In 2002, Amazon started a subsidiary called Amazon Web Services. They later released a service called S3, or Simple Storage Service. S3 underpins a staggering amount of the internet, but simply put, it is a file hosting service. Shortly after came EC2, or Elastic Compute Cloud, which allows anyone to click a button and spin up a virtual server in an Amazon data center. The virtual server itself wasn’t new technology; emulating multiple smaller computers inside a larger one has been around since the late 1960s. The difference was the software, mainly the web interface and APIs Amazon created for spinning up servers. Companies and developers could now stand up infrastructure in minutes, and if your website suddenly experienced more load, you could programmatically add more servers.
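That “programmatically” is the key word. With AWS’s boto3 library, adding a server is a function call. A hedged sketch is below, with a placeholder AMI ID, assuming your AWS credentials are already configured; real deployments would lean on Auto Scaling groups rather than raw run_instances calls.

```python
# pip install boto3
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small virtual server; the AMI ID here is a placeholder.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```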
Generated using AI
Cloud computing kicked into high gear when NASA and Rackspace created Nebula. Nebula was a federal government cloud computing program designed to run government projects in a private cloud. It would later go on to become OpenStack. I will swing back around to OpenStack, but in short, it allows anyone to create their own private or public cloud using their own hardware. By 2010, Rackspace and OVH had gone from hosting providers to cloud-provider businesses. Today, almost everyone interacts with the Cloud. Most apps and software now run natively in the Cloud or across multiple cloud environments. Cloud computing has enabled everyone from small developers to the most prominent companies to quickly deploy the infrastructure required to run their apps. Some cloud providers are even branching out beyond computing. Amazon recently released Ground Station, which allows you to control communications to and from your orbiting satellite. Despite all these benefits, as the major cloud computing companies continue to grow, the internet becomes more centralized. This leads to some significant national security risks.
Centralization
It happens suddenly. You are browsing Facebook and the page won’t load. Your internet connection is fine, so maybe the site is just down. You head over to your favorite gaming site and find that it is down too. Checking Twitter shows that multiple sites are down due to an outage at one of the major cloud providers. It’s easy to think that because your website is hosted in the Cloud on redundant machines, it’s almost immune to outages. But like any piece of technology, things break. Data centers have hardware failures, fiber lines get cut, tornadoes cut power, and earthquakes knock buildings off their foundations. Cloud providers are not immune to these things. Redundancy is not a guarantee when hosting your stuff in the Cloud. Amazon Web Services even points out in its onboarding documentation that if you host all your services in one region, your services are not redundant. (This applies to most major cloud providers.) The simple solution would be to spin up a secondary environment in a different region, right? Sure, but that means you just doubled the cost of running your services. Cloud computing has undoubtedly lowered the cost hurdle, but it can get expensive quickly if you don’t manage costs. As an engineer, I have seen multiple AWS bills exceeding $1 million a month.
Patrick Hertzog via Getty images – OVH Data Center Fire
Despite this, the ease of use has allowed the big three (Microsoft, Amazon, Google) to absorb many popular websites and applications in the United States and Europe. This has also allowed them to buy out many of the smaller data centers across the country. This centralization of the internet into a handful of cloud computing companies has become an Achilles heel.
Pressure Point
My job is information and infrastructure security. Being a security engineer sometimes bleeds into my personal life, and when I look at certain things, I look at them from a security standpoint: where are the weak points, how can I mitigate risk, and how would I break in? When I look at the growth of cloud computing and the number of businesses that rely on it, it scares me. There is so much implicit trust from POS vendors, wireless vendors, credit card companies, hospitals, and banks that the Cloud will always work. That the Cloud is secure. I am telling you it’s not. You can have the best cloud architect set up the most secure, reliable website on AWS or Azure, but all it takes is for one employee at either of those companies to get popped, and it’s game over. All it takes is one bug in code or a misconfigured edge firewall at Google or Amazon, and it’s over. The difference before was that if a hacker got into your data center or a natural disaster took it out, it affected only your business. If any of these large providers gets taken out, hundreds if not thousands of businesses go offline with it.
The Northeast Blackout of 2003
It’s not just digital bugs we should be worried about, but physical ones as well. As we have seen with the Russian invasion of Ukraine, infrastructure is fair game. I won’t get too far into the weeds on the need for more protection of US public infrastructure, but I will say that private infrastructure needs protection as well. Take out a couple of major data centers in the United States, and you will damage its service-based economy. So much of what we do day to day happens online. Most of the applications I pay for are hosted in the cloud. Knock enough of them out, and it all falls apart very quickly.
Decentralization
I have preached that decentralization is excellent when it makes sense, and in this case, I think it fits perfectly. Projects like OpenStack are a great place to start. More companies should run a hybrid cloud, where data is hosted both privately and in a public cloud. Some crypto-related projects even want to network hardware from across the globe into one giant global cloud. While I love the ease of use that comes with the Cloud, I do believe in the saying that putting all your eggs in one basket is a bad idea. I would be willing to bet that we will see a significant outage across one of the larger cloud providers in the next ten years. That outage may help businesses understand that sometimes running some of your own infrastructure is the way to go. I certainly don’t want something terrible to happen to anyone’s livelihood, but if something were to happen, I would rather not see a third of the internet go dark.