1.0.0 (2024-03-23)
Bug Fixes
- Change organisation into teams (#132) (5c23bb1)
- A better landing page (#23) (82e7e67)
- Ability to call external APIs (#45) (1840315)
- Add Airbyte docs (5b989d2)
- Add certs (645395a)
- Add new images (5ae65bb)
- Add RabbitMQ connector (#386) (b7922fc)
- Add step (329d378)
- Add step attribute to temperature form field (#93) (2a639a1)
- Add testing data (cc264b3)
- Airbyte to Dataset (#387) (b1aae89)
- Allow configuration of chunking (#48) (6675ce4)
- Allow multiple document upload (825b897)
- Allow user to set trim ratio (#159) (8c72480)
- Another attempt to fix tests. (de4cb2b)
- API to create prompts based on dataset. (#40) (e0a47a2)
- Attach Keycloak to Postgres (#414) (56bda68)
- Bare metal (#409) (e4424e3)
- Build Keycloak container (95188ea)
- Build Keycloak container (c48d30f)
- Caps for SSE (bc2ac69)
- Cascade deletes to chunks from datasets (#394) (3f2df99)
- Change name (fd2b4af)
- Change ownership (b78b4ae)
- Check authz for API keys. (0878cbc)
- Configurable logout URL (#283) (e695e4f)
- Configure access to the chunking engine (bcc74a8)
- Connect prompts to the models. Use the prompts in the console. (#18) (5a6e9be)
- Context size of Llama 2 7B is 2K (8781032)
- Create the operator (#300) (197754a)
- Default embeddings model (#81) (d28523c)
- Deploy Oauth2 Proxy and Envoy. (#309) (9cbd984)
- Document deployment (#140) (4fbf091)
- Don't send max_tokens (591f9cb)
- Don't split streaming JSON. (#361) (d63f2ec)
- Drawer background (2f572c9)
- Encapsulate models (#331) (75d21d6)
- Export video as an asset (7abb9fa)
- Fix migrations (#22) (cc9dff8)
- Fix tests (bc4bf47)
- Fix tests (201b2f8)
- Generate all secrets from the operator (#403) (f5575c4)
- Generate markdown on the fly (#64) (09d9963)
- Get integration testing into the CI/CD pipeline (#85) (3a3eccb)
- Get the API working (#96) (b5cfe0c)
- GPU setting for operator (#400) (303499b)
- Guide users towards Kubernetes (#381) (ffb9207)
- Increase upload size and add docs (6ab06ec)
- Install models on K8s (#337) (d36d96d)
- Installation documentation (#26) (91ba404)
- Integration testing (#269) (cbb62e1)
- Integration Testing (#276) (1aeba14)
- K3s running gen AI at the edge. (#405) (cd93444)
- K8s operator in progress (#299) (41b04b6)
- K8s Operator (#294) (f72addb)
- K8s operator (#305) (96c1af3)
- Keycloak on /oidc (e2027b3)
- Kubernetes (#412) (9f31ce7)
- Llama 7B chat (#72) (b3e768f)
- Load the correct model into prompt (4d87e5f)
- Load webpki roots (4bcc9cd)
- Make pipeline job more robust (#151) (b42fbdb)
- Max upload size and API auth. (#180) (1567539)
- Migrations (b7c0392)
- Migrations should run without role permissions (#150) (70f5814)
- More robust handling of newlines for chat completion LLM API (#30) (8c58487)
- New embeddings end point (0ad5b09)
- Ollama docker-compose (#169) (f802d00)
- Only set role if we have permission (80465f9)
- Patch openai (94bb1c0)
- Patch openai node package to work with TGI (#384) (92e4884)
- pgAdmin and GPU (#402) (909ec4f)
- pgAdmin via ingress (#413) (233dbfa)
- Pipeline job in Kubernetes (#312) (973e880)
- Point to correct pod (f3a90e3)
- Pre-load models into embeddings engine (#335) (af7c496)
- Put Keycloak behind a proxy (15ce795)
- Refactor pages and remove code. (#115) (3fd800d)
- Refresh the page during document upload (#379) (1a636c9)
- Remove Keycloak from operator (#362) (5bef780)
- Return Auth error and add API to integration tests (#311) (58c6663)
- Run tests after build (9e27418)
- Security fix (#398) (2ec197f)
- Set correct port in K8s (08c186f)
- Set to cluster IP (d47c328)
- Setup RBAC permissions (#198) (d2eb460)
- Show top usage users (#129) (3c12f45)
- Start to integrate Keycloak in Kubernetes (#306) (54442f1)
- Switch to tailwind for utilities (#110) (2f8c8c2)
- System Administrator (#189) (7dde660)
- Team access (#284) (9800ac0)
- Tests (f7d1a5a)
- Tighten up on context size (#156) (2a4cdc3)
- Try build (7bed460)
- Try on path /oidc (52c5d7b)
- UI/UX misc (#56) (3f6e33b)
- UI/UX issues with the console (#20) (846e9f6)
- Update pgAdmin (#411) (37b30f3)
- Update version (94c164e)
- Update version (9f03b98)
- Update versions (02af031)
- Upgrade Local AI. (#33) (de479ce)
- Use /chat/completions (#59) (f7b2222)
- Use correct table name (db6b890)
- Use correct version of inference engine (7bebdd1)
- Use cria as llama backend (67de6e0)
- Use cria for completions (#67) (54a33d0)
- Use embeddings model attached to dataset. (#378) (b36edc1)
- Use github token (226eccc)
- Use github token (fa0fce2)
- Use https when required (#99) (24e1e9e)
- Use llama-2-7b-chat as the model. (#60) (ba9ef94)
- Use OpenAI API node library (#368) (2f092c4)
- Use the correct model address (77a3b1d)
- Use the new chat API (#49) (15b1007)
- Various gui issues (#179) (2430b46)
- Versions (0b46027)
- We need the patches (cc079b1)