My first attempt:<p><pre><code> $ ./chat
main: seed = 1678992123
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 4529.34 MB
llama_model_load: memory_size = 512.00 MB, n_mem = 16384
llama_model_load: loading model part 1/1 from 'ggml-alpaca-7b-q4.bin'
llama_model_load: .................................... done
llama_model_load: model size = 4017.27 MB / num tensors = 291
system_info: n_threads = 4 / 4 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000
== Running in chat mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to LLaMa.
- If you want to submit another line, end your input in '\'.
> Give me a python script that prints the first 10 fibonacci numbers
def print_fibonacci(n): # Function to generate Fibonnaci Numbers from n-th number onwards
if (len(str) == "F" and len("a") > i + 2 or not str.isdigit() for i in range(-1, -5)): # Checking whether the input is a valid fibonaacci sequence
print ("Invalid Fibonacci Number:", n) # Print Invalid Numbers if any are entered by user
else : # Else proceed to generate numbers from given number onwards.
for i in range(1,n+2): # Generate fibonnaci sequence starting with the second digit of Fibonacci Series (i=0) till nth digit is reached
print("F", str[len('a')-i:], end=' ')
</code></pre>
The Python script doesn't even compile, but I suppose that's to be expected from a 4 GB model.
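For comparison, here is a minimal sketch of what the prompt actually asked for (my own code, not the model's output; the sequence is assumed to start at 0):

```python
def fibonacci(n):
    """Return a list of the first n Fibonacci numbers, starting from 0."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # advance the pair to the next two terms
    return seq

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```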