# Tokenizers

The tokenizer module in Nominatim is responsible for analysing the names given
to OSM objects and the terms of an incoming query in order to make sure they
can be matched appropriately.

Nominatim currently offers only one tokenizer module, the ICU tokenizer. This
section describes the tokenizer and how it can be configured.
The selection of tokenizer is tied to a database installation. You need to
choose and configure the tokenizer before starting the initial import. Once
the import is done, you cannot switch to another tokenizer anymore.
Reconfiguring the chosen tokenizer is very limited as well. See the comments
in each tokenizer section.
## ICU tokenizer

The ICU tokenizer uses the [ICU library](http://site.icu-project.org/) to
normalize names and queries. It also offers configurable decomposition and
abbreviation handling. This tokenizer is currently the default.
To enable the tokenizer, add the following line to your project configuration:

```
NOMINATIM_TOKENIZER=icu
```
### How it works

On import the tokenizer processes names in the following three stages:

1. During the **Sanitizer step** incoming names are cleaned up and converted to
   **full names**. This step can be used to regularize spelling, split
   multi-name tags into their parts and tag names with additional attributes.
   See the [Sanitizers section](#sanitizers) below for available cleaning
   routines.
2. The **Normalization** part removes all information from the full names
   that is not relevant for search.
3. The **Token analysis** step takes the normalized full names and creates
   all transliterated variants under which the name should be searchable.
   See the [Token analysis](#token-analysis) section below for more
   information.
At query time, the tokenizer is responsible for processing incoming
queries. This happens in two stages:

1. During **query preprocessing** the incoming text is split into name
   chunks and normalised. This usually means applying the same normalisation
   as during the import process but may involve other processing like,
   for example, word break detection.
2. The **token analysis** step breaks down the query parts into tokens,
   looks them up in the database and assigns them possible functions and
   probabilities.

Query processing can be further customized, while the rest of the analysis
is hard-coded.
### Configuration

The ICU tokenizer is configured using a YAML file, whose location can be set
with `NOMINATIM_TOKENIZER_CONFIG`. The configuration is read on import and then
saved as part of the internal database status. Later changes to the variable
have no effect.

Here is an example configuration file:
```yaml
query-preprocessing:
    - step: split_japanese_phrases
    - step: regex_replace
      replacements:
          - pattern: https?://[^\s]* # Filter URLs starting with http or https
            replace: ''
    - step: normalize
normalization:
    - ":: lower ()"
    - "ß > 'ss'" # German eszett is unambiguously equal to double ss
transliteration:
    - !include /etc/nominatim/icu-rules/extended-unicode-to-ascii.yaml
    - ":: Ascii ()"
sanitizers:
    - step: split-name-list
token-analysis:
    - analyzer: generic
      variants:
          - !include icu-rules/variants-ca.yaml
          - words:
              - road -> rd
              - bridge -> bdge,br,brdg,bri,brg
      mutations:
          - pattern: 'ä'
            replacements: ['ä', 'ae']
```
The configuration file contains five sections: `query-preprocessing`,
`normalization`, `transliteration`, `sanitizers` and `token-analysis`.
#### Query preprocessing

The section for `query-preprocessing` defines an ordered list of functions
that are applied to the query before the token analysis.

The following is a list of preprocessors that are shipped with Nominatim.

::: nominatim_api.query_preprocessing.normalize
    options:
        docstring_section_style: spacy

::: nominatim_api.query_preprocessing.regex_replace
    options:
        docstring_section_style: spacy
This preprocessor runs any given regex pattern on the input and replaces
matches accordingly.

- pattern: the regular expression to search for
- replace: the string to replace each match with
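For illustration, here is a sketch of a `query-preprocessing` section that
strips URLs from incoming queries before normalization. The pattern is taken
from the example configuration above; the step name `regex_replace` follows
the module-name convention used by `split_japanese_phrases` there:

```yaml
query-preprocessing:
    # Remove URLs from the query before any further processing.
    - step: regex_replace
      replacements:
          - pattern: https?://[^\s]*
            replace: ''
    # Then apply the standard normalization.
    - step: normalize
```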
#### Normalization and Transliteration

The normalization and transliteration sections each define a set of
ICU rules that are applied to the names.

The **normalization** rules are applied after the sanitizer step. They should
remove any information that is not relevant for search at all. Usual rules to
be applied here are: lower-casing, removal of special characters and cleanup
of whitespace.

The **transliteration** rules are applied at the end of the tokenization
process to transfer the name into an ASCII representation. Transliteration can
be useful to allow for further fuzzy matching, especially between different
scripts.

Each section must contain a list of
[ICU transformation rules](https://unicode-org.github.io/icu/userguide/transforms/general/rules.html).
The rules are applied in the order in which they appear in the file.
You can also include additional rules from external YAML files using the
`!include` tag. The included file must contain a valid YAML list of ICU rules
and may again include other files.
The ICU rule syntax contains special characters that conflict with the
YAML syntax. You should therefore always enclose the ICU rules in
double quotes.
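As a minimal sketch, a pair of rule sections could look like the following.
The lower-casing and eszett rules appear in the example configuration above;
`:: Latin ()` and `:: Ascii ()` are common ICU transform calls shown here as
assumptions, so verify them against your ICU installation:

```yaml
normalization:
    - ":: lower ()"     # lower-case all names
    - "ß > 'ss'"        # expand the German eszett
transliteration:
    - ":: Latin ()"     # transliterate other scripts to Latin
    - ":: Ascii ()"     # then reduce to an ASCII representation
```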
#### Sanitizers

The sanitizers section defines an ordered list of functions that are applied
to the name and address tags before they are further processed by the
tokenizer. They allow cleaning up the tagging and bringing it into a
standardized form that is more suitable for building the search index.

Sanitizers only have an effect on how the search index is built. They
do not change the information about each place that is saved in the
database. In particular, they have no influence on how the results are
displayed. The returned results always show the original information as
stored in the OpenStreetMap database.

Each entry contains information of a sanitizer to be applied. It has a
mandatory parameter `step` which gives the name of the sanitizer. Depending
on the type, it may have additional parameters to configure its operation.

The order of the list matters. The sanitizers are applied exactly in the order
that is configured. Each sanitizer works on the results of the previous one.
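For example, a two-step sanitizers section could look like the sketch below.
The step names are documented further down; the `delimiters` value is only an
illustration of a per-sanitizer parameter, so check each sanitizer's
documentation for its exact arguments:

```yaml
sanitizers:
    # Split multi-name values like "Foo;Bar" into separate names first.
    - step: split-name-list
      delimiters: ";"
    # The next step then works on the already split names.
    - step: clean-housenumbers
```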
The following is a list of sanitizers that are shipped with Nominatim.
##### split-name-list

::: nominatim_db.tokenizer.sanitizers.split_name_list
    options:
        docstring_section_style: spacy

##### strip-brace-terms

::: nominatim_db.tokenizer.sanitizers.strip_brace_terms
    options:
        docstring_section_style: spacy

##### tag-analyzer-by-language

::: nominatim_db.tokenizer.sanitizers.tag_analyzer_by_language
    options:
        docstring_section_style: spacy

##### clean-housenumbers

::: nominatim_db.tokenizer.sanitizers.clean_housenumbers
    options:
        docstring_section_style: spacy

##### clean-postcodes

::: nominatim_db.tokenizer.sanitizers.clean_postcodes
    options:
        docstring_section_style: spacy

##### clean-tiger-tags

::: nominatim_db.tokenizer.sanitizers.clean_tiger_tags
    options:
        docstring_section_style: spacy

##### delete-tags

::: nominatim_db.tokenizer.sanitizers.delete_tags
    options:
        docstring_section_style: spacy

##### tag-japanese

::: nominatim_db.tokenizer.sanitizers.tag_japanese
    options:
        docstring_section_style: spacy
#### Token analysis

Token analyzers take a full name and transform it into one or more normalized
forms that are then saved in the search index. In its simplest form, the
analyzer only applies the transliteration rules. More complex analyzers
create additional spelling variants of a name. This is useful to handle
decomposition and abbreviation.

The ICU tokenizer may use different analyzers for different names. To select
the analyzer to be used, the name must be tagged with the `analyzer` attribute
by a sanitizer (see for example the
[tag-analyzer-by-language sanitizer](#tag-analyzer-by-language)).
The token-analysis section contains the list of configured analyzers. Each
analyzer must have an `id` parameter that uniquely identifies the analyzer.
The only exception is the default analyzer, which is used when no special
analyzer was selected. There are also analyzers with special ids:

* '@housenumber'. If an analyzer with that name is present, it is used
  for normalization of house numbers.
* '@postcode'. If an analyzer with that name is present, it is used
  for normalization of postcodes.

Different analyzer implementations may exist. To select the implementation,
the `analyzer` parameter must be set. The different implementations are
described in the following.
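Before turning to the individual implementations, here is a sketch of how
analyzer selection works. The `en` id is an arbitrary example of a value that
a sanitizer such as tag-analyzer-by-language might assign:

```yaml
token-analysis:
    # Default analyzer, used for names without an analyzer attribute.
    - analyzer: generic
    # Used for names tagged with analyzer=en by a sanitizer.
    - id: en
      analyzer: generic
```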
##### Generic token analyzer

The generic analyzer `generic` is able to create variants from a list of given
abbreviation and decomposition replacements and to introduce spelling
variations.

###### Variants

The optional 'variants' section defines lists of replacements which create
alternative spellings of a name. To create the variants, a name is scanned
from left to right and the longest matching replacement is applied until the
end of the string is reached.

The variants section must contain a list of replacement groups. Each group
defines a set of properties that describes where the replacements are
applicable. In addition, the `words` section defines the list of replacements
to be made. The basic replacement description is of the form:

```
<source>[,<source>[...]] => <target>[,<target>[...]]
```

The left side contains one or more `source` terms to be replaced. The right
side lists one or more replacements. Each source is replaced with each
replacement term.
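For example, a replacement group with a `words` section, as in the example
configuration at the top of this page, looks like this:

```yaml
variants:
    - words:
        # "road" is replaced with "rd".
        - road -> rd
        # "bridge" is replaced with each of the five abbreviations.
        - bridge -> bdge,br,brdg,bri,brg
```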
The source and target terms are internally normalized using the
normalization rules given in the configuration. This ensures that the
strings match as expected. In fact, it is better to use unnormalized
words in the configuration because then it is possible to change the
rules for normalization later without having to adapt the variant rules.
###### Decomposition

In its standard form, only full words match against the source. There
is a special notation to match the prefix and suffix of a word:

```yaml
- ~strasse => str # matches "strasse" as full word and in suffix position
- hinter~ => hntr # matches "hinter" as full word and in prefix position
```
There is no facility to match a string in the middle of the word. The suffix
and prefix notation automatically trigger the decomposition mode: two variants
are created for each replacement, one with the replacement attached to the word
and one separate. So in the above example, the tokenization of "hauptstrasse"
will create the variants "hauptstr" and "haupt str". Similarly, the name
"rote strasse" triggers the variants "rote str" and "rotestr". By having
decomposition work both ways, it is sufficient to create the variants at index
time. The variant rules are not applied at query time.

To avoid automatic decomposition, use the '|' notation:

```yaml
- ~strasse |=> str
```

This simply changes "hauptstrasse" to "hauptstr" and "rote strasse" to
"rote str".
###### Initial and final terms

It is also possible to restrict replacements to the beginning and end of a
name:

```yaml
- ^south => s # matches only at the beginning of the name
- road$ => rd # matches only at the end of the name
```

So the first example would trigger a replacement for "south 45th street" but
not for "the south beach restaurant".
###### Replacements vs. variants

The replacement syntax `source => target` works as a pure replacement. It
changes the name instead of creating a variant. To create an additional
version, you'd have to write `source => source,target`. As this is a frequent
case, there is a shortcut notation for it:

```
<source>[,<source>[...]] -> <target>[,<target>[...]]
```

The simple arrow causes an additional variant to be added. Note that
decomposition has an effect here on the source as well. So a rule

```yaml
- ~strasse -> str
```

means that for a word like `hauptstrasse` four variants are created:
`hauptstrasse`, `haupt strasse`, `hauptstr` and `haupt str`.
###### Mutations

The 'mutations' section in the configuration describes an additional set of
replacements to be applied after the variants have been computed.

Each mutation is described by two parameters: `pattern` and `replacements`.
The pattern must contain a single regular expression to search for in the
variant name. The regular expressions need to follow the syntax for
[Python regular expressions](https://docs.python.org/3/library/re.html#regular-expression-syntax).
Capturing groups are not permitted.
`replacements` must contain a list of strings that the pattern
should be replaced with. Each occurrence of the pattern is replaced with
all given replacements. Be mindful of combinatorial explosion of variants.
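The umlaut mutation from the example configuration at the top of the page
shows the format. Every occurrence of 'ä' produces variants with both 'ä'
and 'ae', so a variant like "bäckerei" would additionally yield "baeckerei"
(the word itself is just an illustration):

```yaml
mutations:
    # Each 'ä' in a computed variant is replaced by 'ä' and by 'ae',
    # doubling the number of variants per occurrence.
    - pattern: 'ä'
      replacements: ['ä', 'ae']
```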
The generic analyser supports a special mode `variant-only`. When this mode is
enabled, the analyser consumes the input token and emits only the variants (if
any exist). Enable the mode by adding:

```yaml
mode: variant-only
```

to the analyser configuration.
##### Housenumber token analyzer

The analyzer `housenumbers` is purpose-made to analyze house numbers. It
creates variants with optional spaces between numbers and letters. Thus,
house numbers of the form '3 a', '3A', '3-A' etc. are all considered
equivalent.

The analyzer cannot be customized.
##### Postcode token analyzer

The analyzer `postcodes` is purpose-made to analyze postcodes. It supports
a 'lookup' variant of the token, which produces variants with optional
spaces. Use it together with the clean-postcodes sanitizer.

The analyzer cannot be customized.
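Combining the pieces above, a token-analysis section that wires up both
special analyzers alongside the default one might look like this sketch,
using the special ids described in the Token analysis section:

```yaml
token-analysis:
    # Default analyzer for ordinary names.
    - analyzer: generic
    # Used automatically for house numbers because of the special id.
    - id: "@housenumber"
      analyzer: housenumbers
    # Used automatically for postcodes because of the special id.
    - id: "@postcode"
      analyzer: postcodes
```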
### Reconfiguration

Changing the configuration after the import is currently not possible, although
this feature may be added at a later time.