“The future of AI is bright, and it will continue to revolutionize the way we live and work. With advancements in machine learning and natural language processing, AI will become even more powerful and ubiquitous in the coming years.”
-GPT4All
As much as I harp on the current hype surrounding AI and its pace of advancement, I do believe there is a place and a use for these new tools. While we aren’t going to be replaced overnight, I do expect a big productivity boom from them. One of the most significant drawbacks of the current crop of Large Language Model AIs (LLM AIs) is that they are controlled by someone else. OpenAI has its popular ChatGPT, Facebook has Meta AI, Google has Bard, and Microsoft has Bing AI. All of these companies have to make money, so whether through a subscription or by selling your data, there is a cost. Most of them are also black boxes; you can’t see how they work. If we want to live in a world where everyone is on equal footing with AI, people need to be able to run these models locally. In this post, I will share how you can run your own ChatGPT at home.
When researching this article, I was surprised to find out how many choices there are if you want to run your own LLM AI from home. One of the easiest and quickest ways to get up and running comes from the folks over at GPT4All. They offer a one-click installer that puts a ChatGPT-like AI onto your Windows, Linux, or Apple Mac computer. Once it’s installed, pick the language model you want to work with, and presto! I had zero issues setting this AI up and was asking it questions in minutes. Despite its simplistic interface, its settings menu has a lot of knobs you can play with to tweak your output. It also has one of the largest selections of language models to choose from. While it doesn’t require a GPU, it can tax your CPU, and responses are a little slower depending on the hardware you run it on. Overall, it is the easiest experience to set up and doesn’t require any real technical knowledge. The community support is great if you run into any issues as well.
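If you would rather script against the model than click around the desktop app, GPT4All also ships a Python binding. Below is a minimal sketch assuming the gpt4all package is installed; the model filename is only a placeholder, so substitute whichever model you actually downloaded from the app’s model list.

```python
# Minimal sketch using the gpt4all Python package (pip install gpt4all).
# The model filename is a placeholder; use any model offered in the GPT4All app.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # downloads the model on first use

with model.chat_session():
    # Everything runs locally on the CPU; no API key or account required.
    reply = model.generate("Explain what a large language model is in two sentences.",
                           max_tokens=200)
    print(reply)
```

Nothing here talks to a cloud service, which is the whole point: once the model file is on disk, the prompt and the response never leave your machine.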
Based on the fantastic work of Stanford’s Alpaca Project
Fast and customizable
Huge community support
Does require a bit of technical knowledge
Let’s say you are like me and want to get down in the dirt and really PLAY with an LLM AI. However, you don’t want to make a career out of it; you still get to go home at the end of the day. My suggestion to you would be Alpaca-LoRA. Alpaca low-rank adaptation, or Alpaca-LoRA, is an LLM AI forked from Stanford University’s Alpaca project. Stanford set out to make an LLM AI that fixed some of the deficiencies of ChatGPT, like generating false information and toxic language. Stanford released its assets to the open-source community, which then created Alpaca-LoRA. Once you have cloned the repo and installed the Python requirements, it’s pretty straightforward to get up and running (a rough sketch of what that looks like follows below). I found it to be more descriptive and better able to handle programming challenges than GPT4All. The downside was that everything was handled via the Python console instead of a nice interface like GPT4All. Not ideal, but this is where the amazing community support comes in. The GitHub repo has a resource section for all the projects Alpaca-LoRA has spawned. If you want a ChatGPT-style interface, someone has created that. Maybe you need Alpaca-LoRA in Spanish? Someone has done that too. The open-source community has embraced Alpaca-LoRA to the point that a leaked memo from Google states that the company is falling behind the open-source community. This is the model I ended up going with at home. If you don’t mind getting your hands dirty, this is the model to pick. It’s not as easy to set up as GPT4All, but it has many more features.
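For a sense of what “getting your hands dirty” means in practice, here is a hedged sketch of loading a LLaMA-style base model and layering LoRA adapter weights on top with the Hugging Face transformers and peft libraries, which is roughly what the project’s scripts do. The model and adapter names are placeholders; check the Alpaca-LoRA README for the exact ones it expects, and note that the 7B model still wants a healthy amount of RAM or VRAM.

```python
# Hedged sketch: load a LLaMA-style base model, then apply Alpaca-LoRA adapter
# weights with peft. Model and adapter names are placeholders, not guaranteed to
# match what the repo currently recommends.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "decapoda-research/llama-7b-hf"  # placeholder base model
lora_adapter_name = "tloen/alpaca-lora-7b"         # placeholder LoRA weights

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name,
                                                  torch_dtype=torch.float16)

# LoRA stores small low-rank weight updates; peft applies them to the frozen base.
model = PeftModel.from_pretrained(base_model, lora_adapter_name)
model.eval()

prompt = "### Instruction:\nExplain LoRA fine-tuning in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The instruction/response template matters: Alpaca models were fine-tuned on prompts in that shape, so free-form prompts tend to produce noticeably worse answers.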
Not an LLM but rather a large site hosting thousands of models
Models large and small are available
Try before you download feature
It can be a bit overwhelming
For this last one, I had difficulty narrowing it down to a specific model or program. Instead of just picking one, I’ll let you decide. Hugging Face is the place to go if you want to learn about or play with machine learning. Most of the LLM AIs you can play with today started on this site. You can find everything here, from conversational models to image-to-text, text-to-video, object detection, and more. The best part is that most projects let you play with them before downloading anything. If you are looking for the best LLM AIs, I suggest starting here. Hugging Face isn’t just for grabbing the latest and greatest; it’s also a great place to learn about machine learning and language models. I often pick up on what is happening in AI just by browsing the site. It can be a bit overwhelming to browse, but there is no better place to discover new AI models, and once you find one you like, pulling it down locally takes only a few lines, as the sketch below shows.
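As a quick illustration of how little code it takes once you have picked a model, here is a minimal sketch using the Hugging Face transformers library; the model name is just a placeholder for whatever text-generation model you choose on the hub.

```python
# Minimal sketch using transformers (pip install transformers).
# "gpt2" is a small placeholder; swap in any text-generation model from the hub.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The first call downloads the weights into a local cache; after that it runs offline.
result = generator("Open-source language models are", max_new_tokens=40)
print(result[0]["generated_text"])
```

The same pipeline pattern covers most of the other task categories on the site (image-to-text, object detection, and so on); you mostly just change the task string and the model name.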
Conclusion
I hope you found this helpful; I learned a lot researching this article. I honestly hope the open-source community continues pushing the boundaries of AI. I would much rather have a future where everyone can access these models than one where they are reserved for those who can afford them or locked away in some company’s data center. Until next time!
“When you don’t understand, it’s sometimes easier to look like you do.”
Malcolm Forbes
I have seen an explosion lately of news articles and clickbait claiming that language models like ChatGPT will eventually replace all programming-related jobs. In my first AI and Our Future article, I mentioned that programming could certainly be on the chopping block in the future if these AI models get stronger. That being said, I need to make it clear that if you are learning to program or already work as a programmer, your job is safe.
The Crutch
In part one of my post, I talked about foundational knowledge. As AI grows, humanity will use it more and more as a crutch, to the point that we lose the knowledge of how things actually work. This applies to programming as well. I admit that I have used ChatGPT to write short scripts I need or to create a loop for me so I can focus on some harder aspect of my code. I am knowledgeable enough in PowerShell and Python to understand what ChatGPT is supplying me with. But if I ask ChatGPT to write me a script in C++, a language I have no in-depth knowledge of, I don’t really understand the code it gives me; I just know it does what I asked. Is that a good thing? ChatGPT is very good at writing code, but if you don’t understand the code itself, how do you know it’s secure? How do you know it’s written the right way? The sketch below shows the kind of thing I mean.
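As a hypothetical illustration (this is not code ChatGPT actually gave me), here are two small Python functions that both “work” on normal input. The first builds a SQL query by pasting the user’s text straight into the string, which leaves it open to SQL injection; the second uses a parameterized query. If you don’t understand the difference, both look equally fine.

```python
# Hypothetical example: two functions that look equivalent but are not.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Works for normal input, but a username like "' OR '1'='1" rewrites the
    # query itself (SQL injection) and returns every row in the table.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input as data, not as SQL.
    return conn.execute("SELECT id, username FROM users WHERE username = ?",
                        (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])
    print(find_user_unsafe(conn, "' OR '1'='1"))  # leaks every user
    print(find_user_safe(conn, "' OR '1'='1"))    # returns nothing, as it should
```

If a model hands you the first version and it happens to run, nothing about the output tells you it is dangerous; only understanding the code does.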
It’s kind of like using Google Translate. You can put in English sentences and get out Spanish sentences, but how do you know they are correct? They will probably get your point across, but you don’t truly understand the output. If I am making a website or a program, I want to know that it’s secure and truly works. I could certainly make ChatGPT build the whole thing for me, but I would have no understanding of how it works. I would much rather have a developer who understands what they are doing. I don’t care if they use ChatGPT, as long as they understand what is being written.
AI isn’t the end-all, be-all. It’s a useful tool that will certainly help developers speed up their development time, but it won’t outright replace them. If humanity doesn’t want to lose its foundational knowledge, there will always be a need for someone who understands coding languages at their core.
Author’s Note:
This article also applies to translators. I am still working on part 2 of AI and Our Future; it should be released soon!
“Artificial intelligence and machine learning, as a dominant discipline within AI, is an amazing tool. In and of itself, it’s not good or bad. It’s not a magic solution. It isn’t the core of the problems in the world.”
Vivienne Ming, executive chair and co-founder, Socos Labs
From the movie The Good, the Bad and the Ugly
In this multi-part series, I will be discussing Artificial Intelligence (AI) and its effect on humanity and our future. A lot has already been written about AI and its effects, especially with recently released tools like ChatGPT and Stable Diffusion. However, much of what has been posted, liked, upvoted, and recycled on the internet involves fear and clickbait. In part 1 of this series, I will talk about the good, the bad, and the ugly side of our AI future.
The Good
There are many things that AI, and the computers it is built on, can do better than humans: number crunching, holding information, automation, and pattern recognition. As the computers we build become faster and faster, the AI we build on top of them becomes smarter and smarter. While there is much to be afraid of, there is a lot to be optimistic about as well. For example, let’s talk about AI in healthcare and medicine. In 2022, my wife was experiencing lower back pain that would not go away. It continued to the point that she could not lie in bed and would be curled up on the floor in extreme pain. After a couple of trips to the emergency room and an MRI scan, the hospital found that a spinal disk had bulged out and was compressing her spinal cord. This type of injury is called cauda equina syndrome, and it is a medical emergency. The closest hospital to us is small, but it serves a retirement community that requires a lot of back-related surgeries. Because of this, the hospital had recently hired a new surgeon and bought a new AI-powered robotic arm for back surgeries called ExcelsiusGPS. The arm was designed to assist doctors during surgeries by improving placement (cuts), reducing the need for radiation imaging, and decreasing operating time. Without that AI-assisted machine and her surgeon, she would have had to be flown to a larger hospital hours away. After the surgery, the surgeon stated that if she had been airlifted, she would most likely have been paralyzed from the waist down.
ExcelsiusGPS in action. https://www.globusmedical.com/musculoskeletal-solutions/excelsiustechnology/excelsiusgps/
Not only can AI assist doctors in surgery, but it can also help with one of the hardest parts of medicine: diagnosing a patient. How many times have you gone to a doctor’s office with an ailment, and it takes multiple blood tests or imaging studies to figure out what that ailment is? In 2018, an AI called BioMind beat a team of doctors in Beijing at diagnosing brain tumors and predicting hematoma expansion. It was able to look at brain scans and diagnose correctly 87% of the time, versus the human doctors, who were correct only 66% of the time. In 2020, a state-of-the-art associative AI was pitted against 44 doctors on a test set of 1,671 real medical cases. The AI diagnosed 77.26% of cases correctly, while the doctors averaged 71.40%. Imagine a world where you can type in your symptoms and get a reasonable diagnosis within minutes. Or how about an implant that can monitor your body and alert you instantly to a medical emergency or a growing tumor?
AI-generated warehouse robot
What about AI in productivity and manufacturing? A company I worked for wanted me to head out to their warehouse to map out and hang wireless access points. They wanted this because they were setting up an AI-assisted warehouse system to pick, pack, and ship products. The goal was to improve product tracking and speed up product movement from the warehouse to the customer. Hell, the access points we went with had AI built into the software to help with dropouts and signal optimization. In the service industry, tools like ChatGPT can handle routine customer support requests, freeing up employees to handle more complex tasks. Developers and software engineers can speed up coding by having AI write the mundane structures of the code while they handle the more complex algorithms. Companies like Microsoft, Salesforce, and Google are using AI to help users write emails and generate marketing content.
One of the most valuable resources in the world is time. The time we have on this earth is finite (at least for now), and AI has many benefits in assisting our lives and giving us back time. That being said, AI could certainly reach the point where it gives people too much time back by taking over entire industries and jobs. This leads us to…
The Bad
Right now we live in an AI-assistive world. I use AI for spell-checking and article flow. I use it to check my code and format it correctly. I use it for image generation and even SEO. So what happens when AI-assistive becomes AI-controlling? Let’s start with the fat elephant in the room: ChatGPT, created by OpenAI. In OpenAI’s own words, ChatGPT is:
… a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
OpenAI
If you haven’t used it, I would highly suggest going over and having a conversation with it. I myself have used it countless times since its release to test its limits and to assist me in tasks. That being said, companies are already replacing customer support roles with ChatGPT. As time goes on, here are some more jobs that could very easily be replaced in the near future:
This list isn’t exhaustive, but it does paint a rather scary picture. I have no timeline on when any of this could take place, but it will happen. Before refrigeration, there was a profession of ice cutters who would cut ice and deliver it to storage houses. When refrigeration was invented, I am sure the outcry was vocal and loud. Jobs will certainly be created to manage and design AI, but it won’t happen overnight, and it’s hard to tell whether it will create more jobs than it destroys.
What about AI creating inequality and discrimination? AI isn’t free. It requires powerful hardware and software to make it work. While Stable Diffusion, which generates images from text, and ChatGPT are free to use, OpenAI is already selling a paid tier of ChatGPT. Almost every major tech company is planning to offer AI to the masses, and most will have a paid service of some sort that offers more features and speed. How does someone of little means compete with someone who can afford to be assisted by AI?
AI can also be discriminatory. Most machine learning models are trained on data, and they are only as good as the data they are trained on. If the training data is biased or incomplete, the model will make biased predictions. For example, if a model is trained on historical data that includes discriminatory patterns, it may perpetuate those biases by making unfair decisions. AI can also be biased even if the training data is not, because the algorithms are designed by humans who may unconsciously embed their own biases into them. For example, an algorithm designed to screen job applications may discriminate against certain groups of people if it is built to favor educational backgrounds or work experiences that historically disadvantaged those groups. AI is only as good as the data it was created on and the people who created it. The toy example below shows how quickly biased history turns into biased predictions.
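To make the training-data point concrete, here is a hedged toy sketch using entirely synthetic data and scikit-learn (not any real hiring system). The “historical” decisions require one group to clear a much higher skill bar, and a model trained on that history reproduces the gap for two equally skilled candidates.

```python
# Toy sketch with synthetic data: a model trained on biased "historical" hiring
# decisions learns to reproduce the bias. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.uniform(0, 10, n)       # the thing that should matter
group = rng.integers(0, 2, n)       # two demographic groups, 0 and 1

# Biased history: group 1 had to clear a much higher bar to get hired.
threshold = np.where(group == 1, 8.0, 5.0)
hired = (skill + rng.normal(0, 1.0, n) > threshold).astype(int)

# Train on exactly what the history shows, group feature included.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different group membership.
candidates = np.array([[7.0, 0], [7.0, 1]])
print(model.predict_proba(candidates)[:, 1])  # group 1 scores far lower
```

Dropping the group column does not automatically fix this either; in real data, other features often act as proxies for it.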
https://xkcd.com/2347/
I debated putting this last part in the ugly category because I already see it happening in our day-to-day lives. AI is becoming a crutch, and it could certainly lead to the loss of certain human skills in the future. Most of us would struggle to get somewhere new without GPS and our favorite map apps. When I cook, I constantly ask Alexa to set timers for me. We use AI without even realizing it when we search for information and troubleshoot problems. If you ever have some free time and like reading science fiction, I suggest you read the Foundation series by Isaac Asimov. In the story, humanity is at its apex in all things. The Empire spans the galaxy and has been stable for 12,000 years. However, the Empire and its citizens rely on technology to the point that the foundational knowledge that got them there has been lost. At one point in the story, a colony is able to strike a deal with four more powerful nations because it still understands nuclear power and the other nations do not.
AI could become a crutch to the point that we lose foundational knowledge. For example, let’s say I am writing a program that will pull IPv4 addresses out of a random list of IPv4 and IPv6 addresses. I get stuck, so I ask ChatGPT to write me a regex pattern for IPv4 addresses. In seconds it supplies me with the code, and I am on my way. Is that helpful? Yes. Do I understand the code it gave me? No. It is one thing to ask a question and something else entirely to understand the answer. Even with job displacement, economic and social inequality, discrimination, and loss of human skills, there are still things AI can do that are even worse. This leads us to…
The Ugly
With great power comes great responsibility.
Stan Lee – The Amazing Spider-Man
Like almost anything, AI can be abused. It can be used to deceive you, hurt you, and maybe one day even kill you. AI is a tool, and in the wrong hands, a tool can just as easily become a weapon. I work as a security engineer, and every day I have to stay on top of new threat intelligence. Maybe a piece of software or hardware has a flaw that needs to be patched, or a hostile actor is sending our employees phishing emails to steal their credentials. It’s a never-ending game of whack-a-mole. So it disappointed me to hear that people with little to no coding experience were writing ransomware using AI. These people were having ChatGPT write ransomware that, if run on a user’s machine, will look for specific files and encrypt them for extortion. Now, imagine someone with actual experience using the tool.
Created using Stable Diffusion 2.1
Speaking of extortion and blackmail, what about AI image generation? A tool that has been in the news a lot lately, Stable Diffusion, can take text and generate an image. It requires no skill other than typing out what you want in as much detail as possible. You can also upload an image and have it changed for you. Imagine a teenager who is mad that a girl rejected him, so he goes on Instagram and downloads a picture of her. He downloads an edited version of Stable Diffusion that allows adult content to be generated and uses it to remove her clothes. He then sends the images to all his friends.
It’s not just images that AI can generate, but voice and video as well. A couple in Canada lost $21,000 after a scammer called them using their son’s voice. So much of our data is out on the internet, and it doesn’t take much for a scammer to piece enough of it together to potentially ruin your life. Just like in the real world, where one tool is created and another is created to counter it, you cannot put this genie back in its bottle. It will be a constant game of whack-a-mole to defend against these scams and malware.
AI cannot be classified as simply good, bad, or ugly. Rather, it is a complex issue that requires careful consideration and management to ensure it is used in a way that maximizes its benefits while minimizing its potential negative consequences. Right now, we have a chance to figure out the best way to bring AI’s good to the world without the bad and the ugly hurting us immensely in the process. Personally, I hope it will bring about an evolution in humanity and not a revolution that hurts more than it heals. Time will tell.