Thursday, August 3, 2023

DIY an LLM

I have been playing with Google Bard, a Large Language Model (LLM) service provided by Google. The results have been impressive in some instances but less so in others, probably due in large part to the prompts I provided.  For instance, my questions to Bard about the U.S. correctional system and prison reform produced concise, well-expressed answers.  On the other hand, requests for a report on the histories of Haiti and the Dominican Republic gave me brief and superficial results, much inferior to what I found on Wikipedia on the same subjects.

Much of the variation I have seen in the quality of Bard's results is no doubt related to my own ignorance of Artificial Intelligence and Large Language Models.  Fortunately, simple searches using easily accessed services like Google Search turn up useful entry points for learning about these subjects.  For instance, a search on the terms "LLM Portability" turned up a good summary of LLM capabilities, which in turn suggested additional lines of inquiry.

As a next step, I decided to use Bard itself to get some quick answers about the operational minimums for running an LLM, which produced the following exchange:

* * *

(Me)  What is the minimum hardware required to run an LLM?


(Bard)  The minimum hardware required to run an LLM depends on the size of the model. For smaller models, you can get away with a standard desktop computer with 16GB of RAM and a decent CPU. However, for larger models, you will need more powerful hardware, such as a server with 128GB of RAM or more and a high-end GPU.

Here are some general hardware recommendations for running LLMs:

RAM: 16GB for small models, 128GB or more for large models

CPU: A decent quad-core CPU or better

GPU: A high-end GPU with at least 8GB of VRAM

Storage: A fast SSD drive

In addition to these hardware requirements, you will also need a powerful operating system, such as Linux or Windows Server.

Here are some specific examples of hardware that can be used to run LLMs:

Desktop computer: A standard desktop computer with 16GB of RAM and a decent CPU can be used to run smaller LLM models.

Server: A server with 128GB of RAM or more and a high-end GPU can be used to run larger LLM models.

Cloud computing: Cloud computing services, such as Google Cloud Platform or Amazon Web Services, can be used to run LLM models on powerful hardware.

The cost of running an LLM will vary depending on the size of the model and the hardware that is used. For smaller models, the cost can be relatively low. However, for larger models, the cost can be quite high.
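Bard's answer ties RAM to model size without quantifying the relationship.  A rough rule of thumb (my own addition, not something Bard said) is that the model weights alone occupy the parameter count times the bytes used per parameter:

```python
def model_memory_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Rough RAM/VRAM needed just to hold the model weights.

    Ignores activations, KV cache, and framework overhead, so treat the
    result as a lower bound.  bytes_per_param: 2.0 for 16-bit weights,
    0.5 for 4-bit quantized weights.
    """
    return params_billion * 1e9 * bytes_per_param / 1024**3

# A 7-billion-parameter model in 16-bit precision needs about 13 GB for
# weights alone; 4-bit quantization cuts that to roughly 3.3 GB, which is
# why quantized models can fit on the 16GB desktop Bard describes.
```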

* * *

(Me)  Provide some examples of results that can be obtained from a Large Language Model running on a desktop computer.


(Bard)  Sure, here are some examples of results that can be obtained from a Large Language Model running on a desktop computer:

Text generation: The model can be used to generate text, such as poems, code, scripts, musical pieces, email, letters, etc. The text generated by the model will be similar to human-written text, and it can be used for a variety of purposes, such as creative writing, marketing, and customer service.

Translation: The model can be used to translate text from one language to another. The translations produced by the model will be more accurate than those produced by traditional machine translation systems, and they will be more natural-sounding.

Question answering: The model can be used to answer questions about a wide range of topics. The model can access and process information from the real world through Google Search, and it can use this information to answer your questions in a comprehensive and informative way.

Summarization: The model can be used to summarize text. The summaries produced by the model will be concise and informative, and they will capture the main points of the original text.

Creative writing: The model can be used to generate creative text, such as poems, stories, and scripts. The text generated by the model will be original and engaging, and it can be used for a variety of purposes, such as entertainment, education, and marketing.

These are just a few examples of the results that can be obtained from a Large Language Model running on a desktop computer. As these models continue to develop, we can expect to see even more impressive and useful results in the future.
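To make "summarization" concrete, here is a toy extractive summarizer (not an LLM, just word-frequency scoring in plain Python) that illustrates the shape of the task:

```python
import re
from collections import Counter

def naive_summary(text: str, n_sentences: int = 2) -> str:
    """Toy extractive summarizer: score each sentence by the text-wide
    frequency of its words and keep the top scorers in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
    ranked = sorted(range(len(sentences)), key=lambda i: -score(sentences[i]))
    keep = sorted(ranked[:n_sentences])
    return " ".join(sentences[i] for i in keep)
```

An LLM's abstractive summaries are far better than this, of course; the point is only to show what kind of output the task produces.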

* * *

(Me)  What is the minimum Internet connection speed needed for the effective use of an LLM on a desktop computer?


(Bard)  The minimum internet connection speed needed for the effective use of an LLM on a desktop computer depends on the specific LLM and the tasks you want to use it for. However, a good rule of thumb is to have an internet speed of at least 25 Mbps. This will allow you to run most LLMs without any major problems. If you want to use an LLM for more demanding tasks, such as generating realistic text or translating languages, you may need an internet speed of 50 Mbps or more.

Here are some additional tips for getting the best performance from your LLM:

Use a wired internet connection instead of Wi-Fi. This will give you a more stable and consistent internet connection.

Close any other programs that you are not using. This will free up resources on your computer and improve the performance of your LLM.

Use a high-quality LLM. Some LLMs are more demanding than others, so it is important to choose one that is well-suited for your needs.

I hope this helps!
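Bard's 25 Mbps figure deserves some context.  The text exchanged in a single chat turn is tiny (a few kilobytes), so bandwidth matters mainly if you are downloading model files to run locally.  A quick back-of-envelope helper (my own sketch, not from Bard):

```python
def transfer_seconds(megabytes: float, mbps: float) -> float:
    """Seconds to move `megabytes` of data over an `mbps` (megabits/s) link."""
    return megabytes * 8 / mbps

# A 4 KB chat response at 25 Mbps arrives in about a millisecond, while a
# 4 GB model download at the same speed takes about 22 minutes.
```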

* * *

(Me)  Estimate the costs of running an LLM on a desktop system including internet connection services.


(Bard)  Sure, here are some estimates for the costs of running an LLM on a desktop system, including internet connection services:

Hardware: The cost of the hardware will depend on the specific LLM you want to run and the specifications of your desktop system. However, you can expect to spend at least $500 for a decent GPU and $1,000 for a high-end GPU. 

Electricity: The cost of electricity will depend on your local electricity rates and how much you use your desktop system. However, you can expect to pay an additional $10-20 per month for electricity if you use your desktop system for several hours per day.

Internet connection: The cost of your internet connection will depend on your ISP and the speed of your connection. However, you can expect to pay an additional $10-20 per month for a high-speed internet connection.


Total monthly cost:

Low-end: $70-100
Mid-range: $100-150
High-end: $150-200

It's important to note that these are just estimates, and the actual costs may vary depending on your specific circumstances. For example, if you live in an area with high electricity rates, your electricity costs will be higher. Similarly, if you use your desktop system for a lot of other tasks, your electricity costs will be higher.

Overall, the costs of running an LLM on a desktop system can be significant. However, if you need to use the LLM for a lot of tasks, it may be worth the investment.
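Bard's totals can be sanity-checked with simple arithmetic.  The helper below uses assumed figures of my own (wattage, a US-average electricity rate, a flat internet fee), so substitute your local numbers:

```python
def monthly_cost(watts: float, hours_per_day: float,
                 rate_per_kwh: float = 0.15, internet_fee: float = 15.0) -> float:
    """Estimated monthly cost in dollars of running a desktop LLM setup.

    Assumptions (mine, not Bard's): a 30-day month, a flat internet fee,
    and a US-average electricity rate of $0.15/kWh.
    """
    kwh = watts / 1000 * hours_per_day * 30
    return kwh * rate_per_kwh + internet_fee

# A 350 W desktop run 4 hours a day draws 42 kWh/month: about $6.30 of
# electricity plus $15 of internet, roughly $21/month in running costs
# before amortizing the hardware itself.
```

The gap between this figure and Bard's $70-200 range suggests Bard may be folding amortized hardware cost into its monthly estimate.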
