PC & IT SUPPORT MADE EASY FORUM

ChatGPT on a Raspberry Pi Locally


Post by jamied_uk 11th January 2024, 17:17






https://www.youtube.com/watch?v=N0718RfpuWE


https://github.com/antimatter15/alpaca.cpp

https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/tree/main?clone=true

https://github.com/antimatter15/alpaca.cpp/releases/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here are the instructions to run a ChatGPT-like model locally on your device:

1. Download the zip file corresponding to your operating system from the latest release https://github.com/antimatter15/alpaca.cpp/releases/

On Windows, download `alpaca-win.zip`; on Mac (both Intel and ARM), download `alpaca-mac.zip`; on Linux (x64), download `alpaca-linux.zip`.
2. Download `ggml-alpaca-7b-q4.bin` (https://huggingface.co/Sosaka/Alpaca-...) and place it in the same folder as the `chat` executable from the zip file.
3. Once you've downloaded the model weights and placed them into the same directory as the `chat` or `chat.exe` executable, run `./chat` in the terminal (for macOS and Linux) or `.\Release\chat.exe` (for Windows).
4. You can now type to the AI in the terminal and it will reply.

If you prefer building from source, follow these instructions:

For macOS and Linux:

1. Clone the repository using `git clone https://github.com/antimatter15/alpac...`
2. Navigate to the cloned repository using `cd alpaca.cpp`.
3. Run `make chat`.
4. Run `./chat` in the terminal.

For Windows:

1. Download the weights via any of the links in "Get started" above, and save the file as `ggml-alpaca-7b-q4.bin` in the main Alpaca directory.
2. In the terminal window, run `.\Release\chat.exe`.
3. You can now type to the AI in the terminal and it will reply.

That's it! As long as the downloaded model is in the same location as the `chat` executable, you're ready to go.
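Before launching, you can sanity-check the layout with a short script. This is a hedged sketch: the `check_ready` helper is hypothetical and just encodes the rule above (model weights next to the chat binary); the demo directory is simulated so the script runs anywhere.

```shell
#!/bin/bash
# Pre-flight check: the model weights must sit in the same folder as the
# chat executable. check_ready is a hypothetical helper for illustration.
check_ready() {
    local dir="$1"
    [ -x "$dir/chat" ] && [ -f "$dir/ggml-alpaca-7b-q4.bin" ]
}

# Simulate a correctly laid-out directory to show the check passing.
dir=$(mktemp -d)
touch "$dir/chat" && chmod +x "$dir/chat"
touch "$dir/ggml-alpaca-7b-q4.bin"
if check_ready "$dir"; then result="ready"; else result="missing files"; fi
echo "$result"
rm -rf "$dir"
```

In real use you'd point `check_ready` at your alpaca.cpp folder before running `./chat`.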


More Guide Notes And Alternative Methods!
~~~~~~~~~~~~~~~~~~~~~

https://www.youtube.com/watch?v=N0718RfpuWE&t=9s

I Ran ChatGPT on a Raspberry Pi Locally!

My RPi AI server IP:
192.168.2.40


sudo apt install -y build-essential




GitHub Repo:

git clone https://github.com/antimatter15/alpaca.cpp.git

cd alpaca.cpp
make chat


Model Weights: https://huggingface.co/Sosaka/Alpaca-...

Download ggml-alpaca-7b-q4.bin

(put it in the same location as the chat binary, i.e. the alpaca.cpp folder)

./chat



As part of Meta’s commitment to open science, today we are publicly releasing LLaMA (Large Language Model Meta AI), a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field.
We are making LLaMA available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a LLaMA model card that details how we built the model in keeping with our approach to Responsible AI practices.

Over the last year, large language models — natural language processing (NLP) systems with billions of parameters — have shown new capabilities to generate creative text, solve mathematical theorems, predict protein structures, answer reading comprehension questions, and more. They are one of the clearest cases of the substantial potential benefits AI can offer at scale to billions of people.

Smaller models trained on more tokens — which are pieces of words — are easier to retrain and fine-tune for specific potential product use cases. We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens.

Like other large language models, LLaMA works by taking a sequence of words as an input and predicts a next word to recursively generate text. To train our model, we chose text from the 20 languages with the most speakers, focusing on those with Latin and Cyrillic alphabets.

There is still more research that needs to be done to address the risks of bias, toxic comments, and hallucinations in large language models. Like other models, LLaMA shares these challenges. As a foundation model, LLaMA is designed to be versatile and can be applied to many different use cases, versus a fine-tuned model that is designed for a specific task. By sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating these problems in large language models. We also provide in the paper a set of evaluations on benchmarks evaluating model biases and toxicity to show the model’s limitations and to support further research in this crucial area.

To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license focused on research use cases. Access to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world. People interested in applying for access can find the link to the application in our research paper.

We believe that the entire AI community — academic researchers, civil society, policymakers, and industry — must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular. We look forward to seeing what the community can learn — and eventually build — using LLaMA.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Another similar method to achieve the same thing

How to Install Alpaca on a Raspberry Pi and Turn It into an AI Chatbot Server
In this step-by-step section, we will guide you through installing and running Alpaca, an instruction-following language model similar to ChatGPT, on the latest Raspberry Pi. By the end, your Pi will be transformed into a private AI chatbot server you can interact with offline, without sending queries to the cloud.

Prerequisites
Before we get started with the installation, let’s go over what you’ll need:

Hardware
A Raspberry Pi 5 board with the latest Raspbian OS image installed. Any model with 4GB+ RAM should work.
A microSD card with at least 32GB storage for the OS and model weights.
A power supply and micro HDMI cable for the Pi.
A heat sink for managing thermals.
Software
Raspbian with desktop and recommended software
Git and CMake for installing Alpaca
An SSH client to connect remotely
As long as you cover these basics, you are good to go! Feel free to use an older Pi if you have one lying around (make sure it has at least 4GB of RAM). Alpaca supports ARM architectures like the Pi's, though you may need to adjust the model size based on available memory.

With the gear ready, let’s get to the fun part – installing and interacting with Alpaca!

Step 1 – Boot Up the Raspberry Pi
Insert your flashed microSD card into the Pi, connect peripherals like the keyboard, mouse and monitor, and power it on to boot into the Raspbian desktop.

Once loaded, connect to the internet if WiFi credentials are saved. Otherwise, configure your wireless network from Preferences -> WiFi Configuration.

Next, we’ll enable SSH so we can work remotely if needed. Go to Preferences -> Raspberry Pi Configuration -> Interfaces and toggle SSH to on.

Finally, click OK and let the Pi restart before moving on.
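With SSH enabled, you can drive everything below from another machine on the LAN. A minimal sketch, assuming the default `pi` user (your username and the Pi's IP, like the 192.168.2.40 address mentioned earlier in this thread, may differ):

```shell
#!/bin/bash
# Build the SSH command for reaching the Pi headlessly.
# The user and IP below are assumptions; substitute your own.
pi_user="pi"
pi_ip="192.168.2.40"
ssh_cmd="ssh ${pi_user}@${pi_ip}"
echo "$ssh_cmd"    # run this from your desktop or laptop
```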

Step 2 – Install Dependencies
Now we need to install developer tools like Git and CMake, which are required for building software from source code:

sudo apt update
sudo apt install git cmake build-essential libssl-dev -y
This will refresh package indexes and install the tools using apt.

Step 3 – Download and Build Alpaca
With dependencies handled, it’s time to download Alpaca. We’ll clone the repo from GitHub using git:

git clone https://github.com/antimatter15/alpaca.cpp
This creates a local copy at ~/alpaca.cpp. Change into this new directory:

cd alpaca.cpp
And build the chat binary using Make:

make chat
You should now have an executable chat program compiled specifically for your Pi’s ARM architecture.

Step 4 – Download the LLM Model
While the chat tool is now built, it doesn't have anything to talk about yet! We need to download compatible model weights for it to load on startup.

The 7B weight file can be found here:

wget -O ggml-alpaca-7b-q4.bin https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/resolve/main/ggml-alpaca-7b-q4.bin
The -O flag saves the file directly as ggml-alpaca-7b-q4.bin, so no renaming is needed afterwards.

Step 5 – Start The AI Chatbot
Fire up your private Alpaca server simply by running:

./chat
You should see a loading screen as it initializes the model, tokenizes vocabulary, sets up pointers etc. Within 30 seconds, you’ll be greeted with the Alpaca prompt!

Go ahead and say hello, or ask any question that comes to mind!

To exit the chat, press Ctrl + C. You can restart it again by running ./chat whenever you like without needing to repeat the full installation process.

Some example queries to try:

What is a reverse shell?

How do I get a reverse shell on a Windows PC?

Give Alpaca a test drive locally before exposing it more widely on your network. Feel free to tweak chat parameters like the number of inference threads if you want to play with performance.
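For the thread tweak, here's a hedged heuristic: match the inference thread count to the Pi's core count, capped at four (the `-t` and `-m` flags follow the llama.cpp-style CLI that alpaca.cpp derives from; confirm with `./chat --help` on your build):

```shell
#!/bin/bash
# Pick an inference thread count from the machine's core count.
# The cap of 4 matches a Raspberry Pi 4/5's four cores (illustrative only).
cores=$(nproc)
threads=$(( cores < 4 ? cores : 4 ))
echo "suggested: ./chat -m ggml-alpaca-7b-q4.bin -t $threads"
```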

And that’s it! By completing these steps, you now have your very own AI chatbot powered by a ChatGPT-style language model running on the Raspberry Pi, controllable entirely offline for private use.


Last edited by jamied_uk on 20th January 2024, 20:24; edited 1 time in total
jamied_uk
Admin

Posts : 2952
Join date : 2010-05-09
Age : 41
Location : UK

https://jnet.sytes.net


Re: ChatGPT on a Raspberry Pi Locally

Post by jamied_uk 19th January 2024, 15:18

My custom .bashrc file for Raspberry Pi OS, which includes a voice function and an exec-permission helper called x


Code:
sudo apt install -y figlet libttspico-utils gedit


Code:
gedit ~/.bashrc


Code:

# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
year="2024"
# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac

# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth

# append to the history file, don't overwrite it
shopt -s histappend

# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000

# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize

# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar

# make less more friendly for non-text input files, see lesspipe(1)
#[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
    debian_chroot=$(cat /etc/debian_chroot)
fi

# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
    xterm-color|*-256color) color_prompt=yes;;
esac

# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
force_color_prompt=yes

if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
   # We have color support; assume it's compliant with Ecma-48
   # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
   # a case would tend to support setf rather than setaf.)
   color_prompt=yes
    else
   color_prompt=
    fi
fi

if [ "$color_prompt" = yes ]; then
    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w \$\[\033[00m\] '
else
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt

# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
    PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
    ;;
*)
    ;;
esac

# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
    test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
    alias ls='ls --color=auto'
    #alias dir='dir --color=auto'
    #alias vdir='vdir --color=auto'

    alias grep='grep --color=auto'
    alias fgrep='fgrep --color=auto'
    alias egrep='egrep --color=auto'
fi

# make a file executable: x <file>
function x(){
sudo chmod +x "$1"
}

# speak text aloud via ~/speak.sh
function speak(){
~/speak.sh "$1"
}
# colored GCC warnings and errors
#export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'

# some more ls aliases
#alias ll='ls -l'
#alias la='ls -A'
#alias l='ls -CF'

# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi

echo "JNET (C) $year" | figlet
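To show what the x helper above does, here is a small self-contained demo (sudo is dropped so it runs as a regular user, and the temp script is purely illustrative):

```shell
#!/bin/bash
# Demo of the x() helper from the .bashrc above, without sudo.
x() {
    chmod +x "$1"
}

demo=$(mktemp)
printf '#!/bin/sh\necho hello\n' > "$demo"
x "$demo"          # mark the script executable
out=$("$demo")     # now it can be run directly
echo "$out"
rm -f "$demo"
```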


Re: ChatGPT on a Raspberry Pi Locally

Post by jamied_uk 19th January 2024, 15:34

tspeak script (reads the saved chat log aloud)


Code:
#!/bin/bash
file_to_play="/home/$USER/Documents/AI-RPI/Past_Chats.txt"
~/speak.sh "$(cat "$file_to_play")"



The actual speak script (the first script relies on this one to work):
speak.sh


Code:
#!/bin/bash
pico2wave -w /tmp/test.wav "$1"
aplay /tmp/test.wav
rm /tmp/test.wav
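A slightly more defensive variant of speak.sh: it uses mktemp so overlapping calls don't clobber the same /tmp/test.wav, and skips quietly when pico2wave isn't installed. Same pico2wave/aplay tools as above; an untested sketch, not a drop-in replacement.

```shell
#!/bin/bash
# Speak the first argument aloud via a per-call temp wav file.
msg="${1:-hello from the Pi}"
wav=$(mktemp --suffix=.wav)
if command -v pico2wave >/dev/null 2>&1; then
    pico2wave -w "$wav" "$msg"
    aplay -q "$wav"
fi
rm -f "$wav"
echo "done: $msg"
```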


Another Bonus Example
Code:
#!/bin/bash
file_to_play="/home/$USER/Documents/AI-RPI/alpaca.cpp/pirate_story.txt"
echo "Playing $file_to_play"
cat "$file_to_play"
~/speak.sh "$(cat "$file_to_play")"

