Slides for our paper "Practical Performance of Space Efficient Data Structures for Longest Common Extensions" presented at ESA 2020

Abstract

For a text $T[1,n]$, a Longest Common Extension (LCE) query $lce_T(i,j)$ asks for the length of the longest common prefix of the suffixes $T[i,n]$ and $T[j,n]$, identified by their starting positions $1 \leq i,j \leq n$. A classic problem in stringology asks to preprocess a static text $T[1,n]$ over an alphabet of size $\sigma$ so that LCE queries can be answered efficiently on-line. Since its introduction in the 1980s, this problem has found numerous applications: in suffix sorting, edit distance computation, approximate pattern matching, finding regularities, string mining, and many more. Textbook solutions offer $O(n)$ preprocessing time and $O(1)$ query time, but they employ memory-heavy data structures, such as suffix arrays, which in practice are several times bigger than the text itself.
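To make the query definition concrete, here is a minimal sketch of the naive LCE computation by direct character comparison (0-indexed, with a hypothetical function name; this is the baseline scanning approach, not one of the paper's data structures):

```python
def lce_naive(T, i, j):
    """Length of the longest common prefix of the suffixes
    T[i:] and T[j:], computed by scanning character by character."""
    n = len(T)
    k = 0
    while i + k < n and j + k < n and T[i + k] == T[j + k]:
        k += 1
    return k

# Example: suffixes "abracadabra" and "abra" share the prefix "abra".
print(lce_naive("abracadabra", 0, 7))  # -> 4
```

This runs in time proportional to the answer, which is fast for short common prefixes but degenerates on long ones, motivating the preprocessed solutions discussed below.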

Very recently, more space efficient solutions using $O(n\log\sigma)$ bits of total space or even only $O(\log n)$ bits of extra space have been proposed: string synchronizing sets [Kempa and Kociumaka, STOC’19, and Birenzwige et al., SODA’20] and in-place fingerprinting [Prezza, SODA’18]. The goal of this article is to present well-engineered implementations of these new solutions and to study their practicality on a commonly agreed-upon text corpus. We show that both perform extremely well in practice, with space consumption of only around 10% of the input size for string synchronizing sets (around 20% for highly repetitive texts), and essentially no extra space for fingerprinting. Interestingly, our experiments also show that both solutions become much faster than naive scanning even when the common prefixes sought are only of moderate length, contradicting a common belief that sophisticated data structures for LCE queries are not competitive with naive approaches [Ilie and Tinta, SPIRE’09].
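To illustrate the idea behind fingerprint-based LCE queries, the sketch below combines Karp-Rabin prefix fingerprints with a binary search on the match length. This is a simplified, Monte Carlo illustration only: class and method names are hypothetical, and unlike Prezza's in-place structure it stores all $n$ prefix hashes explicitly.

```python
import random

class FingerprintLCE:
    """Karp-Rabin prefix fingerprints answering LCE queries in
    O(log n) time via binary search (illustrative sketch only;
    answers are correct with high probability, not always)."""

    def __init__(self, T, q=(1 << 61) - 1):
        self.q = q
        self.b = random.randrange(256, q)  # random base, prime modulus q
        n = len(T)
        self.h = [0] * (n + 1)   # h[k] = fingerprint of prefix T[0:k]
        self.pw = [1] * (n + 1)  # pw[k] = b^k mod q
        for k, c in enumerate(T):
            self.h[k + 1] = (self.h[k] * self.b + ord(c)) % q
            self.pw[k + 1] = self.pw[k] * self.b % q
        self.n = n

    def _sub(self, i, length):
        # Fingerprint of the substring T[i:i+length].
        return (self.h[i + length] - self.h[i] * self.pw[length]) % self.q

    def lce(self, i, j):
        lo, hi = 0, self.n - max(i, j)  # answer lies in [lo, hi]
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if self._sub(i, mid) == self._sub(j, mid):
                lo = mid   # prefixes of length mid (probably) match
            else:
                hi = mid - 1
        return lo

fp = FingerprintLCE("abracadabra")
print(fp.lce(0, 7))  # -> 4
```

Each query compares $O(\log n)$ pairs of fingerprints instead of up to $n$ characters, which is why such structures can beat naive scanning once common prefixes grow beyond a moderate length.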