TiDB (/ˈtaɪdiːbiː/, "Ti" stands for Titanium) is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads.[3] Designed to be MySQL compatible, it is developed and supported primarily by PingCAP and licensed under Apache 2.0; it is also available as a paid product. TiDB drew its initial design inspiration from Google's Spanner and F1 papers.[4][5][6]

TiDB
Developer(s): PingCAP Inc.
Initial release: October 15, 2017[1]
Stable release: 8.3.0[2] / August 22, 2024
Written in: Go (TiDB), Rust (TiKV)
Available in: English, Chinese
Type: NewSQL
License: Apache 2.0
Website: en.pingcap.com/tidb/

Release history

See all TiDB release notes.

Main features

Horizontal scalability

TiDB can expand both SQL processing and storage capacity by adding new nodes.

MySQL compatibility

TiDB presents itself to applications as a MySQL 8.0 server, so users can continue to use existing MySQL client libraries.[7] Despite this compatibility, TiDB's SQL processing layer is built from scratch rather than forked from MySQL.[8]
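As an illustration of this compatibility, an application's connection string need not change beyond the endpoint: TiDB listens on port 4000 by default but otherwise accepts the same MySQL-style connection URL. A minimal sketch (host, user, and database names are hypothetical):

```python
# Because TiDB speaks the MySQL wire protocol, a standard MySQL
# connection URL works unchanged; only the default port typically
# differs (4000 for TiDB vs. 3306 for MySQL).
def tidb_url(user, password, host, db, port=4000):
    """Build a MySQL-style connection URL pointing at a TiDB server."""
    return f"mysql://{user}:{password}@{host}:{port}/{db}"

url = tidb_url("app", "secret", "tidb.example.com", "test")
# Any existing MySQL client or driver could consume this URL as-is.
```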

Distributed transactions with strong consistency

TiDB internally shards a table into small range-based chunks referred to as "Regions".[9] Each Region defaults to approximately 100 MB in size, and TiDB uses two-phase commit internally to keep Regions transactionally consistent.
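The range-based sharding described above can be caricatured as follows. This is a toy model of splitting a sorted key space into size-bounded Regions, not TiKV's actual splitting algorithm:

```python
REGION_SPLIT_BYTES = 100 * 1024 * 1024  # ~100 MB default, per the text

def split_into_regions(sorted_kvs, split_bytes=REGION_SPLIT_BYTES):
    """Group sorted (key, value_size) pairs into contiguous Regions.

    Each Region is a (start_key, end_key) range whose accumulated
    value size stays under the split threshold. Illustrative only:
    real TiKV tracks approximate sizes and splits Regions online.
    """
    regions, start, acc = [], None, 0
    for key, size in sorted_kvs:
        if start is None:
            start = key
        acc += size
        if acc >= split_bytes:
            regions.append((start, key))
            start, acc = None, 0
    if start is not None:
        regions.append((start, sorted_kvs[-1][0]))
    return regions
```

Because Regions are key ranges rather than hash buckets, a transaction touching adjacent rows tends to touch few Regions; when it spans more than one, the two-phase commit coordinates them.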

Cloud native

TiDB is designed to work in the cloud. The storage layer of TiDB, called TiKV, became a Cloud Native Computing Foundation (CNCF) member project in August 2018, as a Sandbox level project,[10] and became an incubation-level hosted project in May 2019.[11] TiKV graduated from CNCF in September 2020.[12]

Real-time HTAP

TiDB can support both online transaction processing (OLTP) and online analytical processing (OLAP) workloads. TiDB has two storage engines: TiKV, a rowstore, and TiFlash, a columnstore.
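The difference between the two engines comes down to data layout. A conceptual sketch (not TiKV's or TiFlash's actual on-disk format):

```python
rows = [
    {"id": 1, "name": "alice", "amount": 10},
    {"id": 2, "name": "bob", "amount": 20},
]

# Rowstore (TiKV-style): each record kept together under its key,
# which favors OLTP point reads and single-row writes.
rowstore = {r["id"]: r for r in rows}

# Columnstore (TiFlash-style): each column stored contiguously,
# which favors OLAP scans and aggregations over a few columns.
columnstore = {col: [r[col] for r in rows] for col in rows[0]}

# An analytical sum only needs to read the "amount" column.
total = sum(columnstore["amount"])
```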

High availability

TiDB uses the Raft consensus algorithm[13] to keep data available and replicated across storage nodes in Raft groups. If a node fails, the affected Raft groups automatically elect new leaders, allowing the TiDB cluster to self-heal.
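The availability guarantee follows from Raft's quorum rule: a write commits, and a new leader can be elected, only with agreement from a strict majority of a group's replicas. A short sketch of the arithmetic:

```python
def majority(replicas):
    """Raft quorum: the strict majority of replicas that must agree
    before a write commits or a new leader is elected."""
    return replicas // 2 + 1

def tolerated_failures(replicas):
    """How many replicas a Raft group can lose and still make progress."""
    return replicas - majority(replicas)

# With the common 3-replica configuration, one node can fail;
# with 5 replicas, two can fail, at the cost of more storage.
```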

Deployment methods

Kubernetes with Operator

TiDB can be deployed in a Kubernetes-enabled cloud environment by using TiDB Operator.[14] An Operator is a method of packaging, deploying, and managing a Kubernetes application; designed for running stateful workloads, the pattern was first introduced by CoreOS in 2016.[15] TiDB Operator[16] was originally developed by PingCAP and open-sourced in August 2018.[17] TiDB Operator can be used to deploy TiDB on a laptop,[18] Google Cloud Platform's Google Kubernetes Engine,[19] and Amazon Web Services' Elastic Container Service for Kubernetes.[20]

TiUP

TiDB 4.0 introduces TiUP, a cluster operation and maintenance tool. It helps users quickly install and configure a TiDB cluster with a few commands.[21]

TiDB Ansible

TiDB can be deployed using Ansible with the TiDB Ansible playbook, although this method is no longer recommended.[22]

Docker

Docker can be used to deploy TiDB in a containerized environment on multiple nodes and multiple machines, and Docker Compose can be used to deploy TiDB with a single command for testing purposes.[23]

Tools

TiDB has a series of open-source tools built around it to help with data replication and migration for existing MySQL and MariaDB users.

TiDB Data Migration (DM)

TiDB Data Migration (DM) is suited for replicating data from already sharded MySQL or MariaDB tables to TiDB.[24] A common use case of DM is to connect such MySQL or MariaDB tables to TiDB, treating TiDB as a read replica, and then run analytical workloads on the TiDB cluster in near real time.

Backup & Restore

Backup & Restore (BR) is a distributed backup and restore tool for TiDB cluster data.[25]

Dumpling

Dumpling is a data export tool that exports data stored in TiDB or MySQL. It lets users make logical full backups or full dumps from TiDB or MySQL.[26]

TiDB Lightning

TiDB Lightning is a tool that supports high-speed full import of a large MySQL dump into a new TiDB cluster. It is used to populate an initially empty TiDB cluster with large amounts of data, in order to speed up testing or production migration. The import speed improvement is achieved by parsing SQL statements into key-value pairs and then directly generating Sorted String Table (SST) files for RocksDB.[27][28]
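The transform Lightning performs can be caricatured as follows. The key format and the parsing here are deliberately simplified stand-ins: real TiDB row keys are binary-encoded and Lightning uses a full SQL parser, but the shape of the work is the same, turning INSERT rows into sorted key-value pairs ready to be written out as SST files:

```python
import re

def insert_to_kvs(table, stmt):
    """Toy version of Lightning's transform: parse an INSERT statement's
    value tuples into (key, value) pairs keyed like t_<table>_r_<rowid>,
    then sort them by key so they could be emitted as SST files.
    Assumes the first column is the row id; simplified for illustration.
    """
    tuples = re.findall(r"\(([^)]*)\)", stmt.split("VALUES", 1)[1])
    kvs = []
    for row in tuples:
        fields = [f.strip().strip("'") for f in row.split(",")]
        rowid, rest = fields[0], fields[1:]
        kvs.append((f"t_{table}_r_{rowid}", ",".join(rest)))
    return sorted(kvs)
```

Sorting before writing is what lets the SST files be ingested into RocksDB directly, bypassing the normal write path.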

TiCDC

TiCDC is a change data capture (CDC) tool that streams data from TiDB to downstream systems such as Apache Kafka.

TiDB Binlog

TiDB Binlog is a tool used to collect the logical changes made to a TiDB cluster. It is used to provide incremental backup and replication, either between two TiDB clusters, or from a TiDB cluster to another downstream platform.[29][30]

References

  1. ^ "1.0 GA release notes". GitHub.
  2. ^ "Release 8.3.0". August 22, 2024. Retrieved August 27, 2024.
  3. ^ Xu, Kevin (October 17, 2018). "How TiDB combines OLTP and OLAP in a distributed database". InfoWorld.
  4. ^ "F1: A Distributed SQL Database That Scales". 2013.
  5. ^ "Spanner: Google's Globally-Distributed Database". 2012.
  6. ^ Hall, Susan (April 17, 2017). "TiDB Brings Distributed Scalability to SQL". The New Stack.
  7. ^ Tocker, Morgan (November 14, 2018). "Meet TiDB: An open source NewSQL database". Opensource.com.
  8. ^ "Compatibility with MySQL". PingCAP.
  9. ^ "TiKV Architecture". TiKV.
  10. ^ Evans, Kristen (August 28, 2018). "CNCF to Host TiKV in the Sandbox". Cloud Native Computing Foundation.
  11. ^ CNCF (May 21, 2019). "TOC Votes to Move TiKV into CNCF Incubator". Cloud Native Computing Foundation. Retrieved August 19, 2020.
  12. ^ TiKV Authors (September 2, 2020). "Celebrating TiKV's CNCF Graduation". TiKV.
  13. ^ "The Raft Consensus Algorithm".
  14. ^ Jackson, Joab (January 22, 2019). "Database Operators Bring Stateful Workloads to Kubernetes". The New Stack.
  15. ^ Philips, Brandon (November 3, 2016). "Introducing Operators: Putting Operational Knowledge into Software". CoreOS.
  16. ^ "TiDB Operator GitHub repo". GitHub.
  17. ^ "Introducing the Kubernetes Operator for TiDB". InfoWorld. August 16, 2018.
  18. ^ "Deploy TiDB to Kubernetes on Your Laptop".
  19. ^ "Deploy TiDB, a distributed MySQL compatible database, to Kubernetes on Google Cloud".
  20. ^ "Deploy TiDB, a distributed MySQL compatible database, on Kubernetes via AWS EKS". GitHub.
  21. ^ Long, Heng (April 19, 2020). "Get a TiDB Cluster Up in Only One Minute". PingCAP. Retrieved August 19, 2020.
  22. ^ "Ansible Playbook for TiDB". GitHub.
  23. ^ "How to Spin Up an HTAP Database in 5 Minutes With TiDB + TiSpark".
  24. ^ "DM GitHub Repo". GitHub.
  25. ^ Shen, Taining (April 13, 2020). "How to Back Up and Restore a 10-TB Cluster at 1+ GB/s". PingCAP.
  26. ^ "Dumpling Overview". PingCAP.
  27. ^ Chan, Kenny (January 30, 2019). "Introducing TiDB Lightning". PingCAP.
  28. ^ "TiDB Lightning Overview". PingCAP.
  29. ^ "TiDB Binlog Cluster Overview". PingCAP.
  30. ^ Wang, Xiang (January 29, 2019). "TiDB-Binlog Architecture Evolution and Implementation Principles". PingCAP.