Thu, 16 Mar 2023
I have been using GPT-3, and GPT-4 just came out. Finding this interesting, I downloaded LLaMA, so now I have LLaMA on my desktop. There is a Gradio-based, Automatic1111-style text generation web UI that lets you run large language models locally. Plain LLaMA does not fit on my desktop's Nvidia RTX 3060 with 12 GB of VRAM, so I got 4-bit quantized models, which may be substandard, but what can I do? LLaMA comes in different parameter sizes: 7B, 13B, 30B and 65B. My Nvidia card can handle 13B parameters, but it conks out when trying to do the 30B one. That model is 25 gigs large. Reminds me of how Marvin Minsky used to say that when AI is really figured out, it will be able to run on a typical home PC.
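The back-of-the-envelope math on why 13B fits and 30B doesn't is simple enough to sketch. This is a rough weights-only estimate I wrote up (it ignores activations, the KV cache, and quantization overhead, so real usage runs higher; the 12 GB threshold is just my card's VRAM):

```python
# Rough VRAM estimate for quantized model weights: params * bits / 8.
# Weights only -- actual memory use is higher (activations, KV cache, overhead).

def weight_gb(params_billion: float, bits: int = 4) -> float:
    """Approximate size of the weights in GB at the given bit width."""
    return params_billion * 1e9 * bits / 8 / 1e9

VRAM_GB = 12  # my RTX 3060

for size in (7, 13, 30, 65):
    gb = weight_gb(size)
    verdict = "fits" if gb < VRAM_GB else "too big"
    print(f"LLaMA-{size}B @ 4-bit: ~{gb:.1f} GB of weights -> {verdict} in {VRAM_GB} GB")
```

By this count 13B at 4-bit is about 6.5 GB of weights, comfortably inside 12 GB, while 30B is about 15 GB before any overhead, which matches what I saw: the card conks out.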
So with the text generation web UI I am now talking with LLaMA, which is more interesting than talking to Eliza.
OK, finally got him. Saul Bass designed the movie poster for Vertigo (though Bill Gold did design the posters for other Hitchcock movies, like "The Wrong Man"). What does ChatGPT say? (I am pointed at GPT-3, not paying for GPT-4 currently.) Correct! ChatGPT wins my no-prize for that question. Of course, ChatGPT has 175B parameters to the 13B 4-bit LLaMA that I am running on my desktop.