Bayer and McCreight never explained what, if anything, the ''B'' stands for; ''Boeing'', ''balanced'', ''between'', ''broad'', ''bushy'', and ''Bayer'' have been suggested.{{sfn|Comer|1979|p=123 footnote 1}}<ref name=herrenalb/><ref>{{cite web |url=http://scpd.stanford.edu/knuth/index.jsp |title=Stanford Center for Professional Development |website=scpd.stanford.edu |access-date=2011-01-16 |archive-url=https://web.archive.org/web/20140604193847/http://scpd.stanford.edu/knuth/index.jsp |archive-date=2014-06-04 |url-status=dead}}</ref> McCreight has said that "the more you think about what the B in B-trees means, the better you understand B-trees".<ref name=herrenalb>{{cite web |url=https://vimeo.com/73481096 |title=4- Edward M McCreight |first=Peter G. |last=Weiner |date=30 August 2013 |via=Vimeo}}</ref>

In 2011 [[Google]] developed the C++ B-Tree, reporting a 50–80% reduction in memory use for small data types and improved performance for large data sets when compared to a [[red-black tree]].<ref>{{cite web |url=https://opensource.googleblog.com/2013/01/c-containers-that-save-memory-and-time.html |title=C++ containers that save memory and time}}</ref>


==Definition==


# Every node has at most ''m'' children.
# Every node, except for the root and the leaves, has at least ⌈''m''/2⌉ children.
# The root node has at least two children unless it is a leaf.
# All leaves appear on the same level.
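These invariants can be checked mechanically. The following is a minimal illustrative sketch (not from any cited source); `Node` is a hypothetical class holding a `keys` list and a `children` list that is empty for leaves:

```python
import math

class Node:
    """Hypothetical B-tree node: `children` is empty for leaf nodes."""
    def __init__(self, keys, children=None):
        self.keys = keys
        self.children = children or []

def check_invariants(node, m, is_root=True, depth=0, leaf_depths=None):
    """Check the four properties above for a B-tree of order m."""
    if leaf_depths is None:
        leaf_depths = set()
    assert len(node.children) <= m                         # 1. at most m children
    if node.children:
        if is_root:
            assert len(node.children) >= 2                 # 3. non-leaf root: >= 2 children
        else:
            assert len(node.children) >= math.ceil(m / 2)  # 2. other internal nodes
        for child in node.children:
            check_invariants(child, m, False, depth + 1, leaf_depths)
    else:
        leaf_depths.add(depth)
    if is_root:
        assert len(leaf_depths) == 1                       # 4. all leaves on one level
    return True
```

For example, a two-level order-3 tree with a root and two leaf children passes; a tree whose leaves sit at different depths fails property 4.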
The literature on B-trees is not uniform in its terminology.{{sfn|Folk|Zoellick|1992|p=362}}


Bayer and McCreight (1972),{{sfn|Bayer|McCreight|1972}} Comer (1979),{{sfn|Comer|1979}} and others define the '''order''' of a B-tree as the minimum number of keys in a non-root node. Folk and Zoellick{{sfn|Folk|Zoellick|1992|p=363}} point out that this terminology is ambiguous because the maximum number of keys is not clear. An order 3 B-tree might hold a maximum of 6 keys or a maximum of 7 keys. Knuth (1998) avoids the problem by defining the '''order''' to be the maximum number of children (which is one more than the maximum number of keys).{{sfn|Knuth|1998|p=483}}


The term '''leaf''' is also inconsistent. Bayer and McCreight (1972){{sfn|Bayer|McCreight|1972}} considered the leaf level to be the lowest level of keys, but Knuth considered the leaf level to be one level below the lowest keys.{{sfn|Folk|Zoellick|1992|p=363}} There are many possible implementation choices. In some designs, the leaves may hold the entire data record; in other designs, the leaves may only hold pointers to the data record. Those choices are not fundamental to the idea of a B-tree.<ref>{{Harvtxt|Bayer|McCreight|1972}} avoided the issue by saying an index element is a (physically adjacent) pair of (''x'',&nbsp;''a'') where ''x'' is the key, and ''a'' is some associated information. The associated information might be a pointer to a record or records in a random-access file, but what it was did not really matter. {{Harvtxt|Bayer|McCreight|1972}} states, "For this paper the associated information is of no further interest."</ref>


Each internal node in a B-tree has the following format:
{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
|+Internal node structure
!pt<sub>0</sub>
!k<sub>0</sub>
!pr<sub>K-1</sub>
|}
{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
|+Internal node pointer structure
!<math> pt _0</math> when <math>k_0</math> exists
!<math>pt_i</math> when <math>k_{i-1}</math> and <math>k_i</math> exist
|}
Each leaf node in a B-tree has the following format:
{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
|+Leaf node structure
!pr<sub>0</sub>
!k<sub>0</sub>
Line 115: Line 117:
!k<sub>K-1</sub>
|}
{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
|+Leaf node pointer structure
!<math>pr_i</math> when <math>k_i</math> exists
!<math>pr_i</math> when <math>k_i</math> does not exist
|}
The node bounds are summarized in the table below:
{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
!Node type
!Min number<br/> of keys
!Max number<br/> of keys
!Min number<br/> of child nodes
!Max number<br/> of child nodes
|-
|Root node when it is a leaf node
|0
|<math> K </math>
|0
|0
|-
|Root node when it is an internal node
|1
|<math> K </math>
|2
|<math> K + 1 </math>
|-
|Internal node
|<math> \lfloor K/2 \rfloor </math>
|<math> K </math>
|<math> \lfloor K/2 \rfloor + 1 </math>
|<math> K + 1 </math>
|-
|Leaf node
|<math> \lceil K / 2 \rceil </math>
|<math> K </math>
|0
|0
|}


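As an illustrative aside (the function and names are my own, not from the article), the key-count bounds in the table can be computed from ''K'', the maximum number of keys per node:

```python
import math
from dataclasses import dataclass

@dataclass
class Bounds:
    min_keys: int
    max_keys: int

def key_bounds(K, node_type):
    """Key-count bounds for each node type, following the table above."""
    return {
        "root_leaf":     Bounds(0, K),                 # a lone root may be empty
        "root_internal": Bounds(1, K),
        "internal":      Bounds(math.floor(K / 2), K),
        "leaf":          Bounds(math.ceil(K / 2), K),
    }[node_type]
```

For instance, with K&nbsp;=&nbsp;5, an internal node must hold at least 2 keys and a leaf at least 3.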
===Insertion and deletion===
In order to maintain the predefined range of child nodes, internal nodes may be joined or split.


Usually, the number of keys is chosen to vary between <math>d</math> and <math>2d</math>, where <math>d</math> is the minimum number of keys, and <math>d+1</math> is the minimum [[Outdegree#Indegree and outdegree|degree]] or [[branching factor]] of the tree. The factor of 2 will guarantee that nodes can be split or combined.
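A sketch of why the factor of 2 suffices (illustrative code, names assumed): a node that has overflowed to <math>2d + 1</math> keys splits into two nodes of exactly <math>d</math> keys, the minimum allowed, with the median key promoted to the parent:

```python
def split_overfull(keys, d):
    """Split a sorted key list that has overflowed to 2d + 1 entries.
    Both halves end up with exactly d keys, so the split never
    produces an undersized node."""
    assert len(keys) == 2 * d + 1
    mid = len(keys) // 2
    return keys[:mid], keys[mid], keys[mid + 1:]
```

For example, splitting seven keys with d&nbsp;=&nbsp;3 promotes the fourth key and leaves two legal three-key nodes.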
{{Tone|section|date=May 2022}}
===Time to search a sorted file===
Sorting and searching algorithms can be characterized by the number of comparison operations that must be performed using [[Big O notation|order notation]]. A [[binary search]] of a sorted table with {{mvar|N}} records, for example, can be done in roughly {{math|⌈ log<sub>2</sub> ''N'' ⌉}} comparisons. If the table had 1,000,000 records, then a specific record could be located with at most 20 comparisons: {{math|1=⌈ log<sub>2</sub> (1,000,000) ⌉ = 20}}.


Large databases have historically been kept on disk drives. The time to read a record on a disk drive far exceeds the time needed to compare keys once the record is available. The time to read a record from a disk drive involves a [[seek time]] and a rotational delay. The seek time may be 0 to 20 or more milliseconds, and the rotational delay averages about half the rotation period. For a 7200 RPM drive, the rotation period is 8.33 milliseconds. For a drive such as the Seagate ST3500320NS, the track-to-track seek time is 0.8 milliseconds and the average reading seek time is 8.5 milliseconds.<ref>{{cite book |publisher=Seagate Technology LLC |title=Product Manual: Barracuda ES.2 Serial ATA, Rev. F., publication 100468393 |date=2008 |url=http://www.seagate.com/staticfiles/support/disc/manuals/NL35%20Series%20&%20BC%20ES%20Series/Barracuda%20ES.2%20Series/100468393f.pdf |page=6}}</ref> For simplicity, assume reading from disk takes about 10 milliseconds.


The time to locate one record out of a million in the example above would then be 20 disk reads at 10 milliseconds per read, or 0.2 seconds.
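These figures follow directly from the stated assumptions, as a quick check shows (10 milliseconds per read is the simplification above):

```python
import math

N = 1_000_000
ms_per_read = 10                           # simplified disk-access time from above
comparisons = math.ceil(math.log2(N))      # binary-search worst case
naive_seconds = comparisons * ms_per_read / 1000
print(comparisons, naive_seconds)          # 20 comparisons, 0.2 seconds
```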


The search time is reduced because individual records are grouped together in a disk '''block'''. A disk block might be 16 kilobytes. If each record is 160 bytes, then 100 records could be stored in each block. The disk read time above was actually for an entire block. Once the disk head is in position, one or more disk blocks can be read with little delay. With 100 records per block, the last 6 or so comparisons don't need to do any disk reads—the comparisons are all within the last disk block read.
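The arithmetic behind these counts (block and record sizes as assumed above):

```python
import math

records_per_block = 16_000 // 160     # ~100 records of 160 bytes in a 16 kB block
in_block = math.floor(math.log2(records_per_block))  # comparisons inside one block
disk_reads = math.ceil(math.log2(1_000_000)) - in_block
print(in_block, disk_reads)           # about 6 in-block comparisons, 14 reads remain
```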


To speed up the search further, the time to do the first 13 to 14 comparisons (which each required a disk access) must be reduced.


===An index speeds the search===
A B-tree [[Database index|index]] can be used to improve performance. A B-tree index creates a multi-level tree structure that breaks a database down into fixed-size blocks or pages. Each level of this tree can be used to link those pages via an address location, allowing one page (known as a node, or internal page) to refer to another with leaf pages at the lowest level. One page is typically the starting point of the tree, or the "root". This is where the search for a particular key would begin, traversing a path that terminates in a leaf. Most pages in this structure will be leaf pages which refer to specific table rows.

Because each node (or internal page) can have more than two children, a B-tree index will usually have a shorter height (the distance from the root to the farthest leaf) than a binary search tree. In the example above, initial disk reads narrowed the search range by a factor of two. That can be improved by creating an auxiliary index that contains the first record in each disk block (sometimes called a [[Database index#Sparse index|sparse index]]). This auxiliary index would be 1% of the size of the original database, but it can be searched quickly. Finding an entry in the auxiliary index would tell us which block to search in the main database; after searching the auxiliary index, we would have to search only that one block of the main database—at a cost of one more disk read.


In the above example the index would hold 10,000 entries and would take at most 14 comparisons to return a result. Like the main database, the last six or so comparisons in the auxiliary index would be on the same disk block. The index could be searched in about eight disk reads, and the desired record could be accessed in 9 disk reads.
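A quick check of the auxiliary-index numbers (illustrative arithmetic only, with the block size assumed above):

```python
import math

aux_entries = 1_000_000 // 100          # one sparse-index entry per 100-record block
comparisons = math.ceil(math.log2(aux_entries))   # worst case in the auxiliary index
in_block = 6                            # final comparisons stay inside one block
aux_reads = comparisons - in_block
print(comparisons, aux_reads, aux_reads + 1)  # 14 comparisons, 8 reads, 9 with data block
```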


Creating an auxiliary index can be repeated to make an auxiliary index to the auxiliary index. That would make an aux-aux index that would need only 100 entries and would fit in one disk block.


Instead of reading 14 disk blocks to find the desired record, we only need to read 3 blocks. This blocking is the core idea behind the B-tree, where the disk blocks fill out a hierarchy of levels to make up the index. Reading and searching the first (and only) block of the aux-aux index, which is the root of the tree, identifies the relevant block in the aux-index at the level below. Reading and searching that aux-index block identifies the relevant block to read, until the final level, known as the leaf level, identifies a record in the main database. Instead of 150 milliseconds, we need only 30 milliseconds to get the record.
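Stacking sparse-index levels until the top level fits in one block gives the three-read figure; a back-of-envelope sketch under the example's assumptions (the function name is my own):

```python
import math

def index_reads(records, entries_per_block):
    """Block reads needed to reach one record through stacked sparse indexes."""
    levels = 0
    top = records
    while top > entries_per_block:        # add a level: one entry per block below
        top = math.ceil(top / entries_per_block)
        levels += 1
    return levels + 1                     # one read per index level plus the data block

print(index_reads(1_000_000, 100))        # 3 reads: aux-aux index, aux index, data block
```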
Line 208: Line 208:


===Insertions and deletions===
If the [[database]] does not change, then compiling the index is simple to do, and the index need never be changed. If there are changes, managing the database and its index requires additional computation.


Deleting records from a database is relatively easy. The index can stay the same, and the record can just be marked as deleted. The database remains in sorted order. If there are a large number of [[lazy deletion]]s, then searching and storage become less efficient.<ref>
Insertions can be very slow in a sorted sequential file because room for the inserted record must be made. Inserting a record before the first record requires shifting all of the records down one. Such an operation is just too expensive to be practical. One solution is to leave some spaces. Instead of densely packing all the records in a block, the block can have some free space to allow for subsequent insertions. Those spaces would be marked as if they were "deleted" records.


Both insertions and deletions are fast as long as space is available on a block. If an insertion won't fit on the block, then some free space on some nearby block must be found and the auxiliary indices adjusted. The best case is that enough space is available nearby so that the amount of block reorganization can be minimized. Alternatively, some out-of-sequence disk blocks may be used.<ref name="kleppmann_2017" />


===Advantages of B-tree usage for databases===
*[https://www.cs.usfca.edu/~galles/visualization/BTree.html Animated B-Tree visualization]
*[http://www.scholarpedia.org/article/B-tree_and_UB-tree B-tree and UB-tree on Scholarpedia] Curator: Dr Rudolf Bayer
*[http://www.bluerwhite.org/btree B-Trees: Balanced Tree Data Structures] {{Webarchive|url=https://web.archive.org/web/20100305211920/http://www.bluerwhite.org/btree/ |date=2010-03-05 }}
*[https://xlinux.nist.gov/dads/HTML/btree.html NIST's Dictionary of Algorithms and Data Structures: B-tree]
*[http://cis.stvincent.edu/html/tutorials/swd/btree/btree.html B-Tree Tutorial]