# Writing custom sanitizer and token analysis modules for the ICU tokenizer

The [ICU tokenizer](../customize/Tokenizers.md#icu-tokenizer) provides a
highly customizable method to pre-process and normalize the name information
of the input data before it is added to the search index. It comes with a
selection of sanitizers and token analyzers which you can use to adapt your
installation to your needs. If the provided modules are not enough, you can
also provide your own implementations. This section describes how to do that.

!!! warning
    This API is currently in early alpha status. While it is meant to be
    a public API on which other sanitizers and token analyzers may be
    implemented, it is not guaranteed to be stable at the moment.

## Using non-standard sanitizers and token analyzers

Sanitizer names (in the `step` property) and token analysis names (in the
`analyzer` property) may refer to externally supplied modules. There are
two ways to include external modules: through a library or from the
project directory.

To include a module from a library, use the absolute import path as the name
and make sure the library can be found in your `PYTHONPATH`.

To use a custom module without creating a library, put the module somewhere
in your project directory and use the relative path to the file as the name.
Include the whole name of the file including the `.py` ending.
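For example, a sanitizer loaded from the file `sanitizers/drop_long_names.py`
in the project directory and a token analyzer shipped in an installed package
`mylib` (both names are made up for illustration) might be wired into the ICU
tokenizer configuration like this:

```yaml
sanitizers:
    - step: sanitizers/drop_long_names.py
token-analysis:
    - analyzer: mylib.token_analysis.short_names
```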

## Custom sanitizer modules

A sanitizer module must export a single factory function `create` with the
following signature:

```python
def create(config: SanitizerConfig) -> Callable[[ProcessInfo], None]
```
The function receives the custom configuration for the sanitizer and must
return a callable (function or class) that transforms the name and address
terms of a place. When a place is processed, a `ProcessInfo` object is
created from the information that was queried from the database. This
object is sequentially handed to each configured sanitizer, so that each
sanitizer receives the result of processing from the previous sanitizer.
After the last sanitizer is finished, the resulting name and address lists
are forwarded to the token analysis module.
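To make this concrete, here is a minimal sketch of a complete sanitizer
module. The file name and the `max-length` option are invented for this
example, and the import path for `ProcessInfo` is an assumption; only the
`create()` entry point and the `ProcessInfo` members are part of the API
described here:

```python
# sanitizers/drop_long_names.py (hypothetical example module)
from typing import Callable

from nominatim.tokenizer.sanitizers.base import ProcessInfo
from nominatim.tokenizer.sanitizers.config import SanitizerConfig


def create(config: SanitizerConfig) -> Callable[[ProcessInfo], None]:
    # Read the custom 'max-length' option from the configuration,
    # falling back to 100 characters when it is not set.
    max_len = config.get('max-length', 100)

    def _process(obj: ProcessInfo) -> None:
        # Drop all names that exceed the configured length. The
        # address list is left untouched.
        obj.names = [name for name in obj.names
                     if len(name.name) <= max_len]

    return _process
```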
Sanitizer functions are instantiated once and then called for each place
that is imported or updated. They don't need to be thread-safe.
If multi-threading is used, each thread creates its own instance of
the function.

### Sanitizer configuration

::: nominatim.tokenizer.sanitizers.config.SanitizerConfig

### The sanitation function
The sanitation function receives a single object of type `ProcessInfo`
which has three members:

* `place`: read-only information about the place being processed.
* `names`: the current list of names for the place. Each name is a
  `PlaceName` object.
* `address`: the current list of address names for the place. Each name
  is a `PlaceName` object.

While the `place` member is provided for information only, the `names` and
`address` lists are meant to be manipulated by the sanitizer. It may add and
remove entries, change information within a single entry (for example by
adding extra attributes) or completely replace the list with a different one.
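As an illustration, the following sanitation function (a sketch, not a
module shipped with Nominatim) modifies entries in place by attaching an
extra attribute to postcode entries of the address list. The attribute name
`variant` and its value are invented for this example:

```python
def _process(obj: ProcessInfo) -> None:
    # Mark all postcode entries of the address list with an extra
    # attribute. Attributes are simply passed on to the token
    # analysis together with the name.
    for item in obj.address:
        if item.kind == 'postcode':
            item.set_attr('variant', 'strict')
```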

#### PlaceInfo - information about the place

::: nominatim.data.place_info.PlaceInfo

#### PlaceName - extended naming information

::: nominatim.data.place_name.PlaceName

## Custom token analysis module

Setup of a token analyzer is split into two parts: configuration and
analyzer factory. A token analysis module must therefore implement two
functions: `configure()` and `create()`.

::: nominatim.tokenizer.token_analysis.base.AnalysisModule

::: nominatim.tokenizer.token_analysis.base.Analyzer
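
Putting both parts together, a minimal pass-through module could look
roughly like the sketch below. The function signatures follow the API
reference above as best as known; treat them as an assumption and check the
reference classes for the authoritative interface. The file name, class name
and the pass-through behaviour are invented for this example:

```python
# token_analysis/pass_through.py (hypothetical example module)
from typing import Any, List, Mapping

from nominatim.data.place_name import PlaceName


def configure(rules: Mapping[str, Any], normalizer: Any,
              transliterator: Any) -> Any:
    # Translate the configuration rules into whatever structure the
    # analyzer needs later. This sketch passes the rules through.
    return rules


def create(normalizer: Any, transliterator: Any,
           config: Any) -> 'PassThroughAnalyzer':
    # Factory: instantiate the analyzer with the prepared configuration.
    return PassThroughAnalyzer(normalizer, transliterator)


class PassThroughAnalyzer:
    """ Example analyzer that creates no additional spelling variants.
    """

    def __init__(self, normalizer: Any, transliterator: Any) -> None:
        self.normalizer = normalizer
        self.transliterator = transliterator

    def get_canonical_id(self, name: PlaceName) -> str:
        # Use the normalized form of the name as its canonical ID.
        return self.normalizer.transliterate(name.name).strip()

    def compute_variants(self, canonical_id: str) -> List[str]:
        # Emit a single variant: the transliterated canonical form.
        return [self.transliterator.transliterate(canonical_id).strip()]
```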