mirrors / llama.cpp
Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-04-19 21:16:06 +00:00)
llama.cpp / gguf-py / gguf
Latest commit 36eed0c42c by Galunid (2023-11-14 11:17:12 +01:00): stablelm : StableLM support (#3586)
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers
Contents (file, last commit, commit date):

__init__.py        - gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) - 2023-11-11 08:04:50 +03:00
constants.py       - stablelm : StableLM support (#3586) - 2023-11-14 11:17:12 +01:00
gguf_reader.py     - gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) - 2023-11-11 08:04:50 +03:00
gguf_writer.py     - gguf-py: gguf_writer: Use bytearray to build metadata (#4051) - 2023-11-12 16:39:37 -07:00
gguf.py            - gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) - 2023-11-11 08:04:50 +03:00
py.typed           - convert : various script cleanups/fixes + merges and special token handling (#2842) - 2023-08-30 11:25:50 +03:00
tensor_mapping.py  - gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) - 2023-11-11 08:04:50 +03:00
vocab.py           - gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) - 2023-11-11 08:04:50 +03:00