“If you think you know it all about cybersecurity, this discipline was probably ill-explained to you.”
Stephane Nappo
In my old blog, I used to take some time to write about the latest breaches, exploits, and vulnerabilities seen out in the wild. It wasn’t because I wanted to be another voice in the world talking about all the security issues being found. It was more so that I could stay up to date and educated on the latest happenings in the cyber security world (do we still call it cyber security?). I’ve spent a lot of time on AI lately, but I want to get back to what I know best, and that is security. So here are some of the latest happenings in security today:
MOVEit
Progress Software’s MOVEit Transfer application has been found to have multiple security vulnerabilities. Personally, I had never heard of this application, but a lot of government agencies and Fortune 500 companies use it to transfer files securely, both internally and externally. Unfortunately, in May it was found to have a SQL injection flaw that, when abused, can allow an attacker to upload files, download files, and take control of the affected system. The vulnerability was a zero-day, as no mitigation existed to stop the issue when it was disclosed. Compounding the original issue (CVE-2023-34362), two more vulnerabilities were discovered that could allow an attacker to steal data from the affected system. Horizon3.ai provided a simple POC here if you want to play around with it. It’s a great POC, as it’s fully commented Python that shows how the attack works. What makes this attack particularly bad is the application’s widespread use and that the data it handles is considered sensitive, this being a “secure” file transfer application. It’s a bad look for Progress Software, since the product is marketed as “Secure File Transfer and Automation Software for the Enterprise”. With three SQL injection vulnerabilities found, it makes me wonder if any pen testing was done on their own software. The SQL injection vulnerabilities aren’t overly difficult to execute, and more of them continue to be found. Already, local governments in the United States are warning of data breaches from this attack. I wish all the best to the security team over there, and I hope it doesn’t get worse.
Barracuda Email Security Gateway
This gem of a CVE I have personal experience with. While I have not used Barracuda Email Security Gateway (ESG) appliances, the parent company I worked for did. Last fall, the company I worked for started seeing an absolute deluge of email traffic from Barracuda ESG appliances. To us, it amounted to a DDoS attack against our website, on top of a large increase in phishing against our employees. We thought the root cause was a bounce or reflection of our SendGrid marketing emails back to us and that the phishing increase was a separate issue. In the end, we blocked that traffic with the help of our bot mitigation company and went on with our lives. It turns out that the traffic we were seeing was compromised ESG appliances. Now, I am not going to do a major write-up of how this attack worked, as Mandiant already has a phenomenal write-up here. What made this attack particularly bad was something we deal with a lot in information security: persistence.
Once the attacker saw that Barracuda was trying to fix the flaw, they kicked into overdrive. Their first attempt at persistence was setting up cron jobs that enabled a reverse shell and ran hourly. Later attempts modified the Perl update script built into the appliance to execute code. Finally, to top it all off, they deployed a kernel rootkit that would run at boot time. The persistence is so bad that both Barracuda and Mandiant recommend customers replace their entire hardware (oof)! The attacker is most likely from China, as Mandiant discovered that during exfiltration the attacker was mainly looking for specific emails from East Asian academics and government officials. It’s not every day that a system is so seriously infected that the whole thing needs to be replaced. I wish the security team over at Barracuda all the best.
Microsoft Office DDoS Outage
While DDoSing a site isn’t hacking or exploiting a vulnerability, it is annoying. What makes this attack interesting is that it happened to such a large company, and more specifically to a product that has defenses in place to mitigate such an attack. This report hit late last night and can be read here. Basically, a group from Sudan (not verified) launched a Layer 7 (application layer) DDoS attack against Microsoft’s cloud services. Microsoft didn’t provide much data on how much traffic hit them, but they did say it involved different methods of overloading their cloud resources. They did say, however, that the attack used “rented cloud resources, botnets, proxies, and VPNs” to hit them. So whoever this attacker is, they are coordinated enough to deploy that many resources against a complex target like Azure. Microsoft was obviously able to mitigate the attack and made some changes to its firewalls in case of future attacks. I do hope Microsoft releases more information in the future. It was probably a rough day for the network engineers and security engineers over at Microsoft. I hope they get some rest!
Conclusion
There have been multiple vulnerabilities and disclosures this month, but I wanted to focus on just the big ones. I’ll continue writing these once or twice a month, depending on the security landscape and my time. I know in previous posts I went more in-depth on how some of these attacks worked. In future posts I will dive deeper; I just need to get used to writing again. Until next time, stay safe out there!
“The future of AI is bright, and it will continue to revolutionize the way we live and work. With advancements in machine learning and natural language processing, AI will become even more powerful and ubiquitous in the coming years.”
-GPT4All
As much as I harp on the current hype surrounding AI and its pace of advancement, I do believe there is a place and use for these new tools. While we aren’t going to be replaced overnight, I do expect a big productivity boom from all these tools. One of the most significant drawbacks to the current crop of Large Language Model AIs (LLM AIs) is that they are controlled by someone else. OpenAI has its popular ChatGPT, Facebook has Meta AI, Google has Bard, and Microsoft has Bing AI. All of these companies have to make money, so whether through a subscription or selling your data, there is a cost. They are also black boxes in how most of them work. If we want to live in a world where everyone is on equal footing with AI, people need to be able to run these models locally. In this post, I will share how you can run your own ChatGPT at home.
When researching this article, I was surprised to find out how many choices there are if you want to run your own LLM AI from home. One of the easiest and quickest ways to get up and running comes from the folks over at GPT4All. They offer a one-click installer to download a ChatGPT-like AI onto your Windows, Linux, or Apple Mac computer. Once downloaded, pick the language model you want to work with, and presto! I had zero issues setting this AI up and was asking it questions in minutes. Despite its simplistic interface, its settings menu has a lot of knobs you can play with to tweak your output. It also has one of the largest selections of language models to choose from. While it doesn’t require a GPU, it can tax your CPU, and responses are a little slower depending on what you are running it on. Overall, it is the easiest experience to set up and doesn’t require any real technical knowledge. The community support is great if you run into any issues as well.
Based on the fantastic work of Stanford’s Alpaca Project
Fast and customizable
Huge community support
Does require a bit of technical knowledge
Let’s say you are like me and want to get down in the dirt and really PLAY with an LLM AI. However, you don’t want to make a career out of it, and you get to go home at the end of the day. My suggestion to you would be Alpaca-LoRA. Alpaca low-rank adaptation, or Alpaca-LoRA, is an LLM AI forked from Stanford University’s Alpaca Project. Stanford set out to make an LLM AI that fixed some of the deficiencies of ChatGPT, like generating false information and toxic language. Stanford released its assets to the open-source community, which then created Alpaca-LoRA. Once you have cloned the repo and installed the requirements in Python, it’s pretty straightforward to get up and running. I found it to be more descriptive and better able to handle programming challenges than GPT4All. The downside was that everything was handled via the Python console instead of a nice interface like GPT4All’s. Not ideal, but this is where the amazing community support comes in. The GitHub repo has a resource section for all the projects Alpaca-LoRA has spawned. If you want a ChatGPT-style interface, someone has created that. Maybe you need Alpaca-LoRA in Spanish? Someone has done that too. The open-source community has embraced Alpaca-LoRA to the point that a leaked memo from Google states that they are falling behind the open-source community. This is the model I ended up going with at home. If you don’t mind getting your hands dirty, this is the model to pick. It’s not as easy to set up as GPT4All, but it has many more features.
Not an LLM but rather a large site hosting thousands of models
Models large and small are available
Try before you download feature
It can be a bit overwhelming
For this last one, I had difficulty narrowing it down to a specific model or program. Instead of just picking one, I’ll let you decide. Hugging Face is the place to go if you want to learn or play with machine learning. Most of the LLM AIs you can play with today started on this site. You can find everything here: conversational models, image-to-text, text-to-video, object detection, and more. The best part is that most projects allow you to play with them before downloading anything. If you are looking for the best LLM AIs, I suggest starting here. Hugging Face isn’t just for grabbing the latest and greatest; it’s also a great place to learn about machine learning and language models. I often picked up on what is happening in AI just by browsing the site. It can be a bit overwhelming, but there is no better place to discover new AI models.
Conclusion
I hope you found this helpful; I learned a lot from researching this article. I honestly hope that the open-source community continues pushing the boundaries of AI. I would much rather have a future where everyone can access these models than one where they are reserved for those who can afford them or locked away in some company’s data center. Until next time!
“The Linux philosophy is ‘Laugh in the face of danger’. Oops. Wrong One. ‘Do it yourself’. Yes, that’s it.”
Linus Torvalds – Creator of the Linux Kernel
Is Windows the Answer?
Microsoft Windows has been a part of my entire life. I grew up with it at home, at school, and later on at work. When I reached the end of high school, I had a life goal: to work for Microsoft. The only time I used Linux was when I needed to bypass security controls on our home computer so that I could game when I was supposed to be doing my homework. However, with the release of Windows 8 and the interface changes Microsoft continues to push, I decided to change my daily driver to Linux. As you will find out, it wasn’t that simple.
Why?
Since Windows 8, Microsoft has been pushing updates to its interface and dumping anything that still looks like it was built for Windows 98. While some of these changes have been great, many have been terrible. The list is pretty long, but here are the top things that have pushed me over the edge:
Windows 11 hides most of the context menu. If you right-click a file, you must click “Show more options” to see everything. (Whoever thought this was a good idea… shame.)
Windows 10 and 11 are trying to do away with the Metro UI from Windows 8. However, there are still Metro UI elements in Windows 11, on top of the new UI from Windows 10. Hell, there are still UI elements from Windows 98.
The endless push to get you to sign in with a Microsoft Account instead of a local account
Targeted Ads – Tracking telemetry
Ads in the start menu
The amount of bloat being shipped in standard Windows installs.
General lack of cohesion
Forcing Windows Server to use the same UI as consumer Windows.
I’ve stayed with Windows mostly because it’s still one of the most used operating systems in the world and because of its gaming credentials. While I use Linux more and more at work, most of what I do at home is on Windows. I love to game on my PC, and for the longest time, Windows was the only way to game on a PC. That changed recently with the release of the Valve Steam Deck. The Steam Deck runs Linux with a compatibility layer called Proton that allows you to play Windows games on Linux easily. Proton isn’t new; it’s a supercharged version of the compatibility tool called Wine. I’ve used Wine in the past, and while some things worked well, it was always a bit janky and didn’t always work. After getting my Steam Deck, I realized that times have changed, and maybe it was time to give Linux another shot.
Arch Linux
Setup
I have two computers at home: a gaming computer I built myself and a laptop I use for gaming and work. Since I know I will need at least one computer with Windows, I decided to trial-run Linux on my laptop. This was my first mistake, as some laptops are better suited to Linux than others; I’ll get to that in a minute. After deciding to use my laptop, it was time to pick a distro. In the past, I have usually stuck with Debian-based distros like Ubuntu or Mint, but I wanted to try something fresh. Linux distros usually come in two flavors: point releases (LTS) and rolling releases. Point releases, or Long Term Support releases, come from distros like Ubuntu or Fedora that ship big updates and drivers once or twice a year. Point releases have been the gold standard since Linux was created, but in the last few years that has changed. Rolling releases are distros that update as soon as a driver or update is released. They are usually cutting-edge and have all the latest and greatest features. Arch Linux is one of those distros and has been growing in popularity over the last six years, to the point that it is one of the most popular distros around. I tried it a couple of times in 2015 and struggled with it. However, I wanted to try it again because most users who game on Linux swear by it. Instead of installing true Arch Linux, I decided to go with a distro called Manjaro. It’s a more user-friendly Arch Linux and has a lot of built-in scripts to get Steam up and running for gaming. I will be installing it on the following:
Asus G15 laptop: Nvidia RTX 3070 Ti, AMD Ryzen 9 5900HS, 32 GB RAM
Logitech MX Master Mouse
Installation
Manjaro Linux running
Unlike the command-line installer that comes with Arch Linux, Manjaro comes with a simple-to-use interface to get everything set up. It was no different than setting up Ubuntu. After installing, I was greeted with a nice desktop interface. That was when the trouble began. While everything else worked, my Bluetooth mouse did not. I have a Logitech MX Master mouse, which I love. For whatever reason, it would not show up in the Bluetooth menu. Per the Arch documentation, it should just work, but it wouldn’t. Looking around on Reddit and the Manjaro forums, I found this thread about installing different Bluetooth managers. This is where things went off the rails. While testing some of those out, I destroyed the package manager and could not install any packages. By this point, I had spent about two hours trying to get my mouse working and was incredibly frustrated. I had seen a post earlier saying Manjaro wasn’t a true version of Arch Linux because of all the under-the-hood changes it makes. I decided to try Arch and see if I would have better luck.
He did not have better luck.
Narrator
Arch Linux comes with almost nothing. It’s a minimalist Linux system that gives you just enough tools to get up and running; the rest is up to you. I installed a GUI, got the OS up to date, and got the display drivers running. Arch doesn’t even come with Bluetooth support; you have to install the Bluetooth stack yourself. There are many versions you can pick from, but I went with the default utility package. This is where I ran into almost the same problem. The mouse would pair this time but wouldn’t control the screen. I spent another hour on this before I closed my laptop and just walked away. The next day, after doing some research, I found some very interesting things:
Some laptops are more Linux-compatible than others. Gaming laptops have a lot of custom firmware to control the fans, RGB lighting, and other system resources. This software is usually written for Windows, with no Linux support provided.
Without knowing it, I had picked hard mode for getting Linux onto my laptop. During my late-night search, I stumbled on the folks over at asus-linux.org. This team of developers has been working on getting Asus laptops working on Arch Linux and Fedora. Their guide specifically calls out not to install Manjaro on your laptop due to multiple compatibility issues. While they have a very straightforward guide for Arch Linux, the guide that caught my eye was the one for Fedora. Fedora has been around a long time, and while it may not be bleeding edge, it does try to be a middle ground between Arch and Ubuntu. I have used it before, and I am a lot more comfortable with it than Arch.
Fedora Running Gnome
Installing Fedora 37 was very straightforward. I had zero issues getting everything up and running. While I have no love for the Gnome interface and its touch-centric design, unlike Windows, I can change it to whatever I want. Bluetooth worked without issues, my mouse paired, and all the hotkeys worked. The Fedora guide was straightforward, and getting the Nvidia drivers to work was a breeze. My only issue was that booting from a hibernated state can take about a minute. This issue is related to the Sabrent NVMe drives, and the developers say it will be fixed. Before I get into my day-to-day driving of Fedora, I need to take a minute and call out Nvidia.
Nvidia
Unlike Intel and AMD, Nvidia does not open-source its drivers for Linux. They do provide a binary blob that you can run, but in almost all distros you need to make special changes under the hood to get it working without breaking your whole system. The open-source alternative is a package called Nouveau. The developers of this package, with little to no support from Nvidia, have been hacking and patching support together on Linux. It works, but it’s never been great. If I had gotten a laptop with an Intel CPU/AMD GPU or an AMD CPU/AMD GPU, I would have had little to no issue running Linux. While Nvidia has stated they will partially open-source their driver for Linux, progress has been very slow. If you plan on moving to Linux to game in the future, just be aware that Linux gets treated like crap compared to Windows. I hope that changes in the future, and frankly, I am disappointed.
Trial Run
Broadly speaking, running Fedora on my laptop daily has been a breeze. I enjoy seeing daily updates to the kernel and being able to tweak performance at will. Steam and its Proton compatibility layer work amazingly well. Some games do better than others, but for the most part, I only had a few issues here and there. One of the only major problems is that most anti-cheat software doesn’t support Linux, so most online games don’t work. With older games, like Total War: Rome II, the game had issues seeing the correct amount of VRAM on my GPU. None of these problems were game-breaking, and I could game without much fuss. Emulation also worked well, and playing my Nintendo Switch and DS games via emulation was a breeze. While the team over at asus-linux.org has done a great job of providing 1-to-1 tooling from Windows, it’s not perfect. The tool they use to update RGB doesn’t always work, and despite being able to control the fans, the laptop ran a little hotter than it did on Windows. Overall, when gaming, I only lost 5 to 10 frames per second compared to Windows. In most games that wasn’t very noticeable, but in more demanding games where every frame mattered, it could be annoying.
Enabling Proton
In terms of productivity, I didn’t have many issues either. I found tools to replace everything I used in Windows. Email was a bit of a hassle. I use multiple Office 365 accounts spread over multiple domains. I have used the Thunderbird email client in the past, and while it’s usable, it’s not Outlook. I ended up having to pay for a third-party add-on to get authentication working between Thunderbird and Office 365. LibreOffice is a great 1-to-1 replacement for Microsoft Office. I spend most of my productivity time on the web, so using Firefox and Chrome is no different than on Windows. I did have an issue with the Nvidia driver where the laptop would come back from sleep, but the display driver would not. There were lots of complaints about this online, and a simple crontab hack was able to fix it. In general, Fedora consumed far fewer resources at boot, and I didn’t have to worry about bloat or Fedora selling my data.

One problem I did have was with a tool called Remote.it. I use it to connect to my crypto mining warehouse in Montana. I unfortunately have to use this tool because the service provider, Starlink, uses Carrier-Grade NAT (CGNAT) for its service. CGNAT is used by smaller providers who can’t get hold of a large enough pool of IPv4 addresses (there is a shortage). There is a great write-up here, but to keep it simple: if you use Starlink, you will be double-NATed and have no way to port forward. Remote.it is a service that allows you to tunnel around those limitations. Unfortunately, they don’t provide an installer for Fedora. My workaround was to install VirtualBox and run… Windows. It was annoying to have to install Windows for one application, but it also solved my email issues.

My other issue was that my laptop has a 4K display. While I usually run it at 2K, Linux support for HDR and window scaling is lacking. There were a couple of workarounds to get scaling correct, but Linux has a long way to go to support HDR (so does Windows, in that respect).
Notes For The Future
I installed Fedora back in December of 2022. Compared to how things were five years ago, I can already see a future where I no longer use Windows in my day-to-day life. Last month, I purchased a second 1 TB NVMe drive for my laptop, as it had a port available. I ended up installing Windows on one drive and Fedora on the other. I spend most of my time in Linux and switch to Windows only if I need a Windows-native application or want to play a more modern, demanding game. If I could go back to December 2022 and give myself some tips, I would say the following:
Buy an Intel CPU/AMD GPU laptop or an AMD CPU/AMD GPU laptop. Dealing with Nvidia is a pain in the ass.
Rolling releases have tremendous support, but you are beta-testing the software.
Make sure any future laptop you use has basic Linux support. Many laptops these days have special hardware that only works on Windows.
Check to make sure every program you use day to day runs on Linux.
I am pleasantly surprised by how far Linux has come. It still requires the tweaking it’s so well known for, but if you stick with the mainstream Linux distros, it almost “just works.” Even if I didn’t have an Asus laptop and had installed Linux on my desktop instead, I think I would have ended up on Fedora. It is such a solid operating system (OS), and even Linus Torvalds, the creator of Linux, uses it as his day-to-day system. If you want to make the switch, I honestly can’t recommend a better OS. Last but not least, if I could make some recommendations to Microsoft, I would state the following:
You don’t have to be like Apple. Sure, they are riding high, but all great empires fall. Return to the Windows 7 interface and change everything to match it. Upgrade the internals to match Windows 11 (DirectStorage, DirectX support, built-in Linux, etc.).
If you don’t want to settle on the Windows 7 interface, then stay set on the Windows 10 interface and clear out all the old design elements.
If you want to support handheld or touch devices, let the user choose what interface they want to use at installation. Trying to make an operating system that supports all devices is impossible. Gnome did the same thing with their UI, and it’s almost universally hated.
Focus a little more on gamers. I know they aren’t a big subset of your users, but you will lose them if Linux and Proton continue on their current path. Performance is everything.
I will continue to use Windows, and I am sure Windows 11 will get itself sorted out by Windows 12. In the meantime, I will keep using Fedora and enjoy the experience. For now, Windows is still installed, but if things continue, I will probably drop it entirely in the future.
Author’s Note: After writing this post, I stumbled upon the AtlasOS project. The idea behind this project is to remove all the bloat from Windows. It was designed for older hardware, but it has already been shown to boost gaming FPS on modern systems. All it requires is Windows 10. It does have a long list of drawbacks, but if you want to dual-boot a normal Windows OS and a gaming Windows OS, this is probably the way to do it.
“When you don’t understand, it’s sometimes easier to look like you do.”
Malcolm Forbes
I have seen an explosion lately of news articles and clickbait saying that language models like ChatGPT will eventually replace all programming-related jobs. In my first AI and Our Future article, I mentioned that programming could certainly be on the chopping block in the future if these AI models get stronger. That being said, I need to make it clear: if you are learning programming or are already a programmer, your job, or future job, is safe.
The Crutch
In part one of my post, I talked about foundational knowledge. As AI grows, humanity will use it more and more as a crutch, to the point that we lose the knowledge of how things actually work. This applies to programming as well. I admit that I have used ChatGPT to write short scripts I need, or to create a loop for me so I can focus on a harder aspect of my code. I am knowledgeable enough in PowerShell and Python to understand what ChatGPT is supplying me with. But if I ask ChatGPT to write me a script in C++, a language I have no in-depth knowledge of, is it a good thing that I can’t evaluate the code it supplies and only know that it does what I asked? ChatGPT is very good at writing code, but if you don’t understand the code itself, how do you know it’s secure? How do you know it’s written the right way?
It’s kind of like using Google Translate. You can put in English sentences and get out Spanish sentences, but how do you know they’re correct? They will probably get your point across, but you don’t truly understand the output. If I am making a website or program, I want to know that it’s secure and truly works. I can certainly make ChatGPT build the whole thing for me, but I will have no understanding of how it works. I would much rather have a developer who understands what they are doing. I don’t care if they use ChatGPT, as long as they understand what is being written.
AI isn’t the be-all and end-all. It’s a useful tool that will certainly help developers speed up their development time, but it won’t outright replace them. If humanity doesn’t want to lose its foundational knowledge, then there will always be a need for someone who understands coding languages at their core.
Author’s Note:
This article also applies to translators. I am also still working on part 2 of AI and Our Future. It should be released soon!
“Artificial intelligence and machine learning, as a dominant discipline within AI, is an amazing tool. In and of itself, it’s not good or bad. It’s not a magic solution. It isn’t the core of the problems in the world.”
Vivienne Ming, executive chair and co-founder, Socos Labs
From the movie The Good, the Bad and the Ugly
In this multi-part series, I will be discussing Artificial Intelligence (AI) and its effect on humanity and our future. A lot has already been written about AI and its effects, especially with recently released tools like ChatGPT and Stable Diffusion. However, much of what has been posted, liked, upvoted, and recycled on the internet involves fear and clickbait. In part 1 of this series, I will talk about the good, the bad, and the ugly side of our AI future.
The Good
There are many things that AI, and the computers it is built on, can do better than humans: number crunching, holding information, automation, and pattern recognition. As the computers we build become faster and faster, the AI we build on top of them becomes smarter and smarter. While there is much to be afraid of, there is a lot to be optimistic about as well. For example, let’s talk about AI in healthcare and medicine. In 2022, my wife was experiencing lower back pain that would not go away. It continued to the point that she could not lie in bed and would be curled up on the floor in extreme pain. After a couple of trips to the emergency room and an MRI scan, the hospital found that a spinal disk had bulged out and was compressing her spinal cord. This type of injury is called cauda equina syndrome, and it is a medical emergency. The closest hospital to us is small, but it serves a retirement community that requires a lot of back-related surgeries. Because of this, the hospital had recently hired a new surgeon and bought a new AI-powered robotic arm for back surgeries called ExcelsiusGPS. The arm was designed to assist doctors during surgery by improving placement (cuts), reducing the need for radiation imaging, and decreasing operating time. Without that AI-assisted machine and her surgeon, she would have had to be flown to a larger hospital hours away. After the surgery, the surgeon stated that if she had been airlifted, she would most likely have been paralyzed from the waist down.
ExcelsiusGPS in action. https://www.globusmedical.com/musculoskeletal-solutions/excelsiustechnology/excelsiusgps/
Not only can AI assist doctors in surgery, but it can also help with one of the hardest parts of medicine: diagnosing a patient. How many times have you gone to a doctor’s office with an ailment, and it takes multiple blood tests or imaging sessions to figure out what that ailment is? In 2018, an AI called BioMind beat a team of doctors in Beijing at diagnosing brain tumors and predicting hematoma expansion. It was able to look at brain scans and diagnose correctly 87% of the time, versus the human doctors, who were only correct 66% of the time. In 2020, a state-of-the-art associative AI was pitted against 44 doctors across a test set of 1,671 real medical cases. The AI diagnosed 77.26% of cases correctly, while the doctors managed only 71.40%. Imagine a world where you can type in your symptoms and get a reasonable diagnosis within minutes. Or how about an implant that can monitor your body and alert you instantly to a medical emergency or a growing tumor?
AI-generated warehouse robot
What about AI in productivity and manufacturing? A company I worked for wanted me to head out to their warehouse to map out and hang wireless access points. They wanted this because they were setting up an AI-assisted warehouse system to pick, pack, and ship products. The goal was to improve product tracking and product movement from the warehouse to the customer. Hell, the access points we went with had AI built into the software to help with dropouts and signal optimization. In the service industry, tools like ChatGPT can handle routine customer support requests, freeing up employees to handle more complex tasks. Developers and software engineers can speed up coding by having AI write the mundane structures of the code while they handle the more complex algorithms. Companies like Microsoft, Salesforce, and Google are using AI to help users write emails and generate marketing content.
One of the most valuable finite resources in the world is time. The time we have on this earth is finite (at least for now), and AI has many benefits in assisting our lives and giving us back time. That being said, AI could certainly reach the point where it gives people too much time back, by taking over entire industries and jobs. This leads us to…
The Bad
Right now, we live in an AI-assistive world. I use AI for spell-checking and article flow. I use it to check my code and format it correctly. I use it for image generation and even SEO. So what happens when AI-assistive becomes AI-controlling? Let’s start with the fat elephant in the room, which is ChatGPT, created by OpenAI. ChatGPT is:
…a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
OpenAI
If you haven’t used it, I would highly suggest heading over and having a conversation with it. I have used it countless times since its release, both to test its limits and to assist me with tasks. That being said, companies are already replacing customer support roles with ChatGPT. As time goes on, here are some more jobs that could very easily be replaced in the near future:
This list isn’t exhaustive, but it does paint a rather scary picture. I have no timeline for when any of this could take place, but it will happen. Before refrigeration, there was a profession of ice cutters who would cut ice and deliver it to storage houses. When refrigeration was invented, I am sure the outcry was vocal and loud. Jobs will certainly be created to manage and design AI, but it won’t happen overnight, and it’s hard to tell if it will create more than it destroys.
What about AI creating inequality and discrimination? AI isn’t free. It requires powerful hardware and software to make it work. While Stable Diffusion, which generates images from text, is free, and ChatGPT is free to use, OpenAI is already selling a paid tier of ChatGPT. Almost every major tech company is planning to offer AI to the masses, and most will have a paid service of some sort offering more features and speed. How does someone with little means compete with someone who can afford to be assisted by AI?
AI can also be discriminatory. Most machine learning is trained on data, and AI models are only as good as the data they are trained on. If the training data is biased or incomplete, the AI model will make biased predictions. For example, if an AI model is trained on historical data that includes discriminatory patterns, the model may perpetuate those biases by making unfair decisions. AI can also be inherently biased even if the training data is unbiased, because the algorithms are designed by humans who may unconsciously embed their own biases into them. For example, an AI algorithm designed to screen job applications may discriminate against certain groups of people if it is designed to favor certain educational backgrounds or work experiences that historically disadvantaged those groups. AI is only as good as the data it was created on and the people who created it.
https://xkcd.com/2347/
I debated putting this last part in the ugly category because I already see it happening in our day-to-day lives. AI is becoming a crutch, and it could lead to the loss of certain human skills in the future. Most of us would struggle to get somewhere new without GPS and our favorite map apps. When I cook, I constantly ask Alexa to set timers for me. We use AI without knowing it when we search for information and troubleshoot problems. If you ever have some free time and like reading science fiction, I suggest The Foundation series by Isaac Asimov. In the story, humanity is at its apex in all things. The Empire spans the galaxy and has been stable for 12,000 years. However, the Empire and its citizens rely on technology to the point that the foundational knowledge that got them there is lost. At one point in the story, a colony makes a deal with four more powerful nations because it understands nuclear power and the other nations do not.
AI could become a crutch to the point that we lose foundational knowledge. For example, let’s say I am writing a program that pulls IPv4 addresses out of a random list of IPv4 and IPv6 addresses. I get stuck, so I ask ChatGPT to write me a regex for matching IPv4. In seconds, it supplies me with the code, and I am on my way. Is that helpful? Yes. Do I understand the code it gave me? No. It is one thing to ask a question and something else entirely to understand the answer. Here is a sketch of exactly that moment:
The Ugly
With great power comes great responsibility.
Stan Lee – The Amazing Spider-Man
Like almost anything, AI can be abused. It can be used to deceive you, hurt you, and maybe one day even kill you. AI is a tool, and in the wrong hands, a tool can just as easily become a weapon. I work as a security engineer, and every day I have to stay on top of new threat intelligence. Maybe a piece of software or hardware has a flaw that needs to be patched, or a hostile actor is sending our employees phishing emails to steal their credentials. It’s a never-ending game of whack-a-mole. So it disappointed me to hear that people with little to no coding experience were writing ransomware using AI. These people were having ChatGPT write ransomware that, if run on a user’s machine, will look for specific files and encrypt them for extortion. Now, imagine someone with actual experience using the tool.
Created using Stable Diffusion 2.1
Speaking of extortion and blackmail, what about AI image generation? A tool that has been in the news a lot lately, Stable Diffusion, can take text and generate an image. It requires no skills other than typing out what you want in as much detail as possible. You can also upload an image and have it changed for you. Imagine a teenager is mad that a girl rejected him, so he goes on Instagram and downloads an image of her. He downloads an edited version of Stable Diffusion that allows adult content to be generated and removes her clothes. He then sends it to all his friends.
It’s not just images that AI can generate, but voice and video as well. A couple in Canada lost $21,000 after a scammer called them using their son’s voice. So much of our data is out on the internet, and it doesn’t take much for a scammer to piece enough together to potentially ruin your life. Just like in the real world, where one tool is created and another is created to counter it, you cannot put this genie back in its bottle. It will be a constant game of whack-a-mole to defend against these scams and malware.
AI cannot be classified as simply good, bad, or ugly. Rather, it is a complex issue that requires careful consideration and management to ensure that it is used in a way that maximizes its benefits while minimizing its potential negative consequences. Right now, we have a chance to figure out the best way to bring AI’s good into the world without the bad and the ugly immensely hurting us in the process. Personally, I hope that it will bring about an evolution in humanity, and not a revolution that hurts more than it heals. Time will tell.
Some family members and I own a small mining farm out in Montana. We have over 50 of these Jasminer X4 miners. Jasminer burst onto the mining scene last year with highly efficient ETC/ETH miners. The X4 we purchased comes in a 1U form factor and barely draws 300 watts from the wall. With the current crypto winter ongoing, I decided to find out whether hacking the Jasminer X4 is possible, and whether it will go faster.
The Jasminer X4-1u
Jasminer is a subsidiary of a company in China called Sunlune. Sunlune’s first chip, the X4, is an FPGA with 5Gb of built-in memory and 1Tb/s of memory transfer speed; it consumes 23 watts and is designed to generate 65 Mh/s per chip. The Jasminer X4 has eight of these chips in a 1U form factor and can produce 520 Mh/s. It has a Zynq-7000 programmable SoC daughterboard to drive those eight chips. Lastly, to power all this, it has a 300-watt 1U power supply from Wingot. It’s a solid product, but it does have some shortcomings.
Power
The power supply in the X4 is a very cheap 300-watt unit from Wingot. Searching online doesn’t reveal much about the product, but the company does exist. The problem with this power supply is that it barely supplies enough wattage to run the Jasminer. As seen in the photo above, under full mining load this power supply is on the razor’s edge of capacity. We’ve already had two power supplies fail, and at $100 a pop per replacement, I know we can do better. So let’s open her up and see what’s inside.
Internals and Swap out
Inside, everything is pretty bog-standard except for the power supply. The power plug on the X4 is just another computer power cable that has been split and runs back into the actual power supply. I’m not sure why they didn’t flip the internals around 180 degrees so that you plug directly into the unit. Also, every unit we have received is usually missing one fan. Why, I have no idea; maybe to cut costs. The good news is that the power supply can be removed and replaced with any unit that has three 6-pin power connectors. It just so happens I had a cheap RAIDMAX Vortex 600-watt power supply lying around. I swapped in the new power supply, but it did not automatically power on when I plugged it into the wall. By jumping the power supply with a paperclip, I could get the X4 up and running. So yes, you can swap the cheap Wingot power supply for a bigger, and in this case cheaper, unit. With a 600-watt power supply inside, can it go faster?
Overclocking and Hacking
Just a quick note: before I did any more testing, I moved the unit outside. It’s safer if anything blows up, and it’s also colder, sitting at 16 degrees Fahrenheit. Right off the bat, we run into some limits on how far we can push this unit. The two 6-pin PCIe connectors can deliver 75 watts each, which means both combined can deliver 150 watts total. However, that 75-watt rating comes with a very healthy safety margin, so without going too crazy, we should be able to supply a little more juice to the eight chips. First off, we need to get into the machine. Most ASIC manufacturers have disabled SSH password logins and enabled public key authentication only. From a security point of view, this is great, and I greatly approve; however, I would like to overclock when I want. Luckily for us, Jasminer left SSH password login enabled, with the same username and password as the web interface!
Now that I am in, I need to figure out how it all works. I know the Jasminer runs a web interface and that I can set the frequency of the chips to either 200 or 225. However, looking in the usual places for a web server came up empty. Instead of blindly looking around for files, why not run top and see what is running? Running top bore a lot of fruit: I now know the primary X4 process is called jasminer, where the configs are located (/media/configs), and that it runs lighttpd for a web server. Looking at the lighttpd config, I found that the web server files are located in the /www/pages directory. Here is where things get interesting. Opening the pools.html file in vi, it looks like Sunlune originally allowed a frequency of up to 250 but commented that code out in production. It seemed like a simple test, so I removed the comment markers and opened up the pool page to find that 250 was now an option.
Unfortunately, setting the frequency to 250 and applying it was not to be. Once you click apply, it reverts to 200. Nothing is reported in the logs, so the next step was to figure out where that 250 value gets posted to. At the top of the pools file is a CGI (Common Gateway Interface) call; CGI allows a website to interact with an application. This call makes a POST to an executable shell script called set_pools.sh. Opening that file, I found the problem immediately. The bottom of the file has an if statement that says if the value does not equal 200 or 225, set it to 200. A quick edit removing this check should allow my 250 value to pass through. As you can see in the images below, with those two changes, I was able to overclock the unit by an additional 25, to 250.
Results and Conclusion
When I made the changes above, the unit was already mining. Power usage jumped to 350 watts, and the miner crashed. I can see why Sunlune disabled that setting with only a 300-watt power supply. I restarted the process and measured the average hash rate. The results are below:
           Stock     Overclock
Freq       225       250
Watts      270-300   340-370
Mh/s       450-500   500-560
Temp (°C)  37-45     40-50
While there is an improvement, it’s not great. Jasminer’s web interface does not show rejections, but in the miner logs, rejections did increase. The hash rate also fluctuated wildly. Another downside is that the edits to the HTML/CGI files get reverted to their original state after every reboot. However, the overclock stays in place, because all that is really being changed is the frequency value in the jasminer.conf file. Putting in higher or lower values will throw an error and default to 200. There is quite a lot of unused code lying about the system. In the future, I may release custom firmware for those who want a bit more control over their systems. Despite the poor overclock performance, it’s good to see that the power supply can be swapped out for something a little more rugged. Until next time!
The Internet is becoming the town square for the global village of tomorrow.
Bill Gates
Right now, you are on a website hosted in the Cloud. Specifically, this website is hosted on Amazon’s AWS platform. There is a high probability that you were just using an app on your phone hosted on Google Cloud, or browsing a website running services from Microsoft Azure. Almost everything you do online is hosted in the “cloud.” Is that a good thing, and how did the all-consuming Cloud take over the internet?
The Cloud
The word Cloud gets thrown around a lot and is used interchangeably in many ways. The Cloud comes down to this: it is someone else’s infrastructure that you are using. Before the Cloud, and even before modern data centers, you had to purchase hardware and run it yourself if you wanted to put something on the internet. If the application you wanted to run was business-critical, this required a lot of redundant hardware and was therefore expensive. Not only was it costly, it was also time-consuming to set up and manage. If you didn’t provision your hardware correctly and your company suddenly experienced a surge of users, there wasn’t much you could do until more hardware could be purchased and brought online. The answer to this, and the precursor to the Cloud, was co-location. Instead of running your own data center, you could take your hardware and run it in someone else’s. Co-location took the management out of running a data center: companies no longer had to construct a facility and hire employees to monitor their hardware.
Now, if a company needed a server fixed or more capacity for their applications, they filled out a ticket with their hosting company, and the hoster got it done in an hour or two. In most cases, companies didn’t even need to purchase hardware, as they could lease whatever was required from the hosting company. It wasn’t perfect, as there was usually a lag between sending in a ticket and the problem getting fixed. There were also different levels of service a colo could provide: the more you paid, the faster the service you received. These service level agreements and multi-tenant data centers popped up all over the world, and this structure worked from the 90s into the early 2000s.
Marketing and NASA
In 2002, Amazon started a subsidiary called Amazon Web Services. In 2006, it released a service called S3, or Simple Storage Service. S3 underpins a staggering amount of the internet, but simply put, it is a file hosting service. Shortly after, Amazon released a service called EC2, or Elastic Compute Cloud, which allows anyone to click a button and spin up a virtual server in an Amazon data center. The virtual server isn’t new technology; the ability to emulate multiple smaller computers inside a larger one has been around since the late 1960s. The difference was the software, mainly the web interface Amazon created for spinning up servers. Companies and developers could now stand up infrastructure in minutes. If your website suddenly experienced more load, you could programmatically add more servers.
Generated using AI
Cloud computing kicked into high gear when NASA and Rackspace created Nebula. Nebula was a federal cloud computing program designed to run government projects in a private cloud. It would later go on to become OpenStack. I will swing back around to OpenStack, but in short, it allows anyone to create their own private or public cloud using their own hardware. By 2010, Rackspace and OVH had gone from hosting providers to cloud-provider businesses. Today, almost everyone interacts with the Cloud. Most apps and software now run natively in the Cloud or across multiple cloud environments. Cloud computing has enabled everyone from minor developers to the most prominent companies to quickly deploy the infrastructure required to run their apps. Some cloud providers are even branching out beyond computing. Amazon recently released Ground Station, which allows you to control satellite communications to and from your orbiting satellite. Despite all these benefits, as the major cloud computing companies continue to grow, the internet becomes more centralized. This leads to some significant national security risks.
Centralization
It happens suddenly. You are browsing Facebook, and the page won’t load. Your internet connection is fine, so maybe the site is just down. You head over to your favorite gaming site and find that it is down too. Checking Twitter shows that multiple sites are down due to an outage at one of the major cloud providers. It’s easy to think that because your website is hosted in the Cloud on redundant machines, it’s almost immune to outages. But like any piece of technology, things break. Data centers have hardware failures, fiber lines get cut, tornados cut power, and earthquakes knock buildings off their foundations. Cloud providers are not immune to these things, and redundancy is not a guarantee when hosting your stuff in the Cloud. Amazon Web Services even points out in its onboarding documentation that if you host all your services in one region, your services are not redundant. (This applies to most major cloud providers.) The simple solution would be to spin up a secondary environment in a different region, right? Sure, but that means you just doubled the cost of running your services. Cloud computing has undoubtedly lowered the cost hurdle, but it can get expensive quickly if you don’t manage costs. As an engineer, I have seen multiple AWS bills exceeding $1 million a month.
Patrick Hertzog via Getty images – OVH Data Center Fire
Despite this, the ease of use has allowed the big three (Microsoft, Amazon, Google) to absorb many popular websites and applications in the United States and Europe. It has also allowed them to buy out many of the smaller data centers across the country. This centralization of the internet into a handful of cloud computing companies has become an Achilles’ heel.
Pressure Point
My job is information and infrastructure security. Being a security engineer sometimes bleeds into my personal life, and when I look at certain things, I look at them from a security standpoint: where are the weak points, how can I mitigate risk, and how would I break in? When I look at the growth in cloud computing and the number of businesses that rely on it, it scares me. There is so much implicit trust from POS vendors, wireless vendors, credit card companies, hospitals, and banks that the Cloud will always work. That the Cloud is secure. I am telling you it’s not. You can have the best cloud architect set up the most secure, reliable website on AWS or Azure, but all it takes is for one employee at either of those companies to get popped, and it’s game over. All it takes is one bug in code or a misconfigured edge firewall at Google or Amazon, and it’s over. The difference before was that if a hacker got into your data center or a natural disaster took it out, it affected only your business. If any of these large companies gets taken out, hundreds if not thousands of businesses go offline.
The Northeast Blackout of 2003
It’s not just the digital bugs we should be worried about, but the physical ones as well. As we have seen with the Russian invasion of Ukraine, infrastructure is fair game. I won’t get too far into the weeds on the need for more protection of US public infrastructure, but I will add that private infrastructure needs protection as well. Take out a couple of major data centers in the United States, and you will damage its service-based economy. So much of what we do day to day happens online, and most of the applications I pay for are hosted in the cloud. Knock enough of them out, and it all falls apart very quickly.
Decentralization
I have preached that decentralization is excellent when it makes sense, and in this case, I think it fits perfectly. Organizations like OpenStack are a great place to start. More companies should have their own hybrid private cloud, where data is hosted both privately and in a public cloud. Some crypto-related projects even want to network hardware from across the globe into one giant global cloud. While I love the ease of use that comes with the Cloud, I do believe in the saying that putting all your eggs in one basket is a bad idea. I would be willing to bet that we will see a significant outage across one of the larger cloud providers in the next ten years. That outage may help businesses understand that sometimes running some of your own infrastructure is the way to go. I certainly don’t want something terrible to happen to anyone’s livelihood, but if something were to happen, I would rather not see a third of the internet go dark.
My last site was hosted with a cheap third-party provider, and one night, without warning, they decided to close up shop. I had offsite backups, but honestly, the site was getting a little clunky. So I decided to completely redo the site and start fresh. In this space, I will share my thoughts and feelings on everything from artificial intelligence to security awareness to our changing world. Everything I write is my opinion and my opinion only; it does not reflect that of whatever company I happen to be working for. I hope you get something out of this space. Enjoy!