Ecosyste.ms: Packages

An open API service providing package, version and dependency metadata of many open source software ecosystems and registries.

Top 2.0% on proxy.golang.org
Top 0.9% dependent packages on proxy.golang.org
Top 1.4% dependent repos on proxy.golang.org
Top 3.6% forks on proxy.golang.org
Top 1.8% docker downloads on proxy.golang.org

proxy.golang.org : github.com/pemistahl/lingua-go

Package lingua accurately detects the natural language of written text, be it long or short. Its task is simple: it tells you which language some text is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases might include routing e-mails to the right geographically located customer service department based on the e-mails' language.

Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems, or don't want to learn the ropes of them, a small flexible library comes in handy. So far, the only other comprehensive open source library in the Go ecosystem for this task is Whatlanggo (https://github.com/abadojack/whatlanggo). Unfortunately, it has two major drawbacks:

1. Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, it does not provide adequate results.
2. The more languages take part in the decision process, the less accurate the detection results are.

Lingua aims to eliminate these problems. It needs hardly any configuration and yields accurate results on both long and short text, even on single words and phrases. It draws on both rule-based and statistical methods but does not use any dictionaries of words. It does not need a connection to any external API or service either; once the library has been downloaded, it can be used completely offline. Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, 75 languages are supported. They are listed as variants of type Language.
Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts:

1. a list of single words with a minimum length of 5 characters
2. a list of word pairs with a minimum length of 10 characters
3. a list of complete grammatical sentences of various lengths

Both the language models and the test data have been created from separate documents of the Wortschatz corpora (https://wortschatz.uni-leipzig.de) offered by Leipzig University, Germany. Data crawled from various news websites has been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, a random unsorted subset of 1,000 single words, 1,000 word pairs and 1,000 sentences has been extracted.

Given the generated test data, I have compared the detection results of Lingua and Whatlanggo running over the data of Lingua's 75 supported languages. Additionally, I have added Google's CLD3 (https://github.com/google/cld3/) to the comparison with the help of the gocld3 bindings (https://github.com/jmhodges/gocld3). Languages that are not supported by CLD3 or Whatlanggo are simply ignored during the detection process. Lingua clearly outperforms its contenders.

Every language detector uses a probabilistic n-gram (https://en.wikipedia.org/wiki/N-gram) model trained on the character distribution in some training corpus. Most libraries only use n-grams of size 3 (trigrams), which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text, the fewer n-grams are available, and the probabilities estimated from so few n-grams are not reliable.
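To see why trigrams alone struggle on short input, consider this toy sketch (the ngrams helper is hypothetical, not part of Lingua): an 11-character phrase yields nine trigrams to estimate probabilities from, while a two-character word yields none at all.

```go
package main

import "fmt"

// ngrams returns every substring of length n from text,
// operating on runes so multi-byte characters are handled correctly.
func ngrams(text string, n int) []string {
	runes := []rune(text)
	var out []string
	for i := 0; i+n <= len(runes); i++ {
		out = append(out, string(runes[i:i+n]))
	}
	return out
}

func main() {
	fmt.Println(len(ngrams("hello world", 3))) // 9 trigrams
	fmt.Println(len(ngrams("hi", 3)))          // 0: input shorter than n
}
```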
This is why Lingua makes use of n-grams of sizes 1 up to 5, which results in much more accurate prediction of the correct language.

A second important difference is that Lingua does not only use such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique in one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, is the probabilistic n-gram model taken into consideration. This makes sense because loading fewer language models means less memory consumption and better runtime performance.

In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective API methods. If you know beforehand that certain languages will never occur in an input text, do not let those take part in the classification process. The filtering mechanism of the rule-based engine is quite good, but filtering based on your own knowledge of the input text is always preferable. There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. The detection accuracy can improve in such cases if you exclude certain languages from the decision process, or just explicitly include the relevant ones.

Knowing the most likely language is nice, but how reliable is the computed likelihood? And how much less likely are the other examined languages in comparison to the most likely one? A slice of ConfidenceValue can be computed, containing those languages which the calling instance of LanguageDetector has been built from. The entries are sorted by their confidence value in descending order. Each value is a probability between 0.0 and 1.0.
The probabilities of all languages sum to 1.0. If a language is unambiguously identified by the rule engine, the value 1.0 is always returned for that language, and the other languages receive a value of 0.0.

By default, Lingua uses lazy-loading, loading on demand only those language models which are considered relevant by the rule-based filter engine. For web services, however, it is often beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. For this purpose, an eager-loading mode can be enabled when building the detector. Multiple instances of LanguageDetector share the same language models in memory, which are accessed asynchronously by the instances.

By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word `prologue`, for instance, is both a valid English and French word, so Lingua might output either English or French, which could be wrong in the given context. For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed-up probabilities for each possible language have to satisfy. Be aware that the distance between the language probabilities depends on the length of the input text: the longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high; otherwise Unknown will be returned most of the time. This is the return value for cases where language detection is not reliably possible.

Registry - Source - Documentation - JSON
purl: pkg:golang/github.com/pemistahl/lingua-go
Keywords: go, golang-library, language-classification, language-detection, language-identification, language-modeling, language-processing, language-recognition, natural-language-processing, nlp, nlp-machine-learning, text-processing
License: Apache-2.0
Latest release: 9 months ago
First release: almost 3 years ago
Namespace: github.com/pemistahl
Dependent packages: 31
Dependent repositories: 14
Stars: 862 on GitHub
Forks: 45 on GitHub
Docker dependents: 1
Docker downloads: 34
See more repository details: repos.ecosyste.ms
Last synced: 8 days ago

Top 6.6% on proxy.golang.org
tryffel.net/go/virtualpaper v0.6.0
Virtualpaper is a document archive with full-text-search
27 versions - Latest release: about 1 month ago - 100 stars on GitHub
Top 8.5% on proxy.golang.org
github.com/tryffel/virtualpaper v0.6.0
Virtualpaper is a document archive with full-text-search
4 versions - Latest release: about 1 month ago - 108 stars on GitHub
Top 9.4% on proxy.golang.org
gitlab.com/plagiarism_checker/back v0.0.154
154 versions - Latest release: about 2 months ago
gitlab.com/back-libs/infrastructure v0.1.25
26 versions - Latest release: 2 months ago - 4 dependent packages - 0 stars on GitLab.com
github.com/xbclub/bilibilidanmurobot-core v1.1.4
Core component of a Bilibili danmaku (live-comment) bot
17 versions - Latest release: 4 months ago - 0 stars on GitHub
github.com/xbclub/BilibiliDanmuRobot-Core v1.1.4
Core component of a Bilibili danmaku (live-comment) bot
17 versions - Latest release: 4 months ago - 2 dependent packages - 0 stars on GitHub
gitlab.com/etke.cc/mrs/api v0.0.0-20240105094203-2ca07946a1eb
48 versions - Latest release: 5 months ago - 0 stars on GitLab.com
Top 8.4% on proxy.golang.org
github.com/ericvolp12/bsky-experiments v0.0.0-20231220080716-b0a703f35108
A handful of Go-based tools for poking around with BlueSky using the AT Protocol
226 versions - Latest release: 5 months ago - 1 dependent package - 1 dependent repository - 18 stars on GitHub
gitlab.com/plagiarism-checker-group/backend/domains v0.0.31
31 versions - Latest release: 5 months ago - 5 dependent packages - 0 stars on GitLab.com
Top 9.7% on proxy.golang.org
github.com/suosi-inc/go-pkg-spider v0.24.0
A news website data extraction tool library implemented in Golang with relatively intelligent and...
38 versions - Latest release: 5 months ago - 1 dependent repository - 53 stars on GitHub
gitlab.com/plagiarism-checker-group/backend/infrastructure v0.0.20
27 versions - Latest release: 6 months ago - 4 dependent packages - 0 stars on GitLab.com
gitlab.com/plagiarism-checker-group/backend/services v0.0.22
22 versions - Latest release: 7 months ago - 1 dependent package - 0 stars on GitLab.com
Top 8.2% on proxy.golang.org
github.com/seekr-osint/seekr v0.0.0-20230909165137-e5420c457bc0
A multi-purpose OSINT toolkit with a neat web-interface.
53 versions - Latest release: 9 months ago - 26 stars on GitHub
Top 6.2% on proxy.golang.org
github.com/pemistahl/lingua-go/cmd v0.0.0-20230905150314-320c87b00cfb
The most accurate natural language detection library for Go, suitable for long and short text alike
2 versions - Latest release: 9 months ago - 954 stars on GitHub
gitlab.com/webpage-speed-insights/backend v0.0.68
68 versions - Latest release: 9 months ago - 0 stars on GitLab.com
Top 6.0% on proxy.golang.org
github.com/yekta/stablecog/go-server v0.0.0-20230308145726-6db6a3a7fd55
114 versions - Latest release: 9 months ago - 122 stars on GitHub
Top 8.2% on proxy.golang.org
git.zc0901.com/go/god v1.3.49
80 versions - Latest release: 9 months ago
Top 8.6% on proxy.golang.org
github.com/practical-coder/tb v0.0.9
Toolbox: assortment of useful command line tools
5 versions - Latest release: 10 months ago - 1 star on GitHub
Top 9.7% on proxy.golang.org
github.com/kakwa/wows-recruiting-bot v0.0.0-20230508150739-ed84dfeffb62
1 version - Latest release: about 1 year ago
Top 9.0% on proxy.golang.org
github.com/2110336-2565-2/cu-freelance-library v1.12.16
102 versions - Latest release: about 1 year ago - 4 dependent packages - 0 stars on GitHub
Top 9.7% on proxy.golang.org
git.sr.ht/~primalmotion/ldetect v1.1.0
2 versions - Latest release: about 1 year ago
Top 7.2% on proxy.golang.org
github.com/stablecog/go-apps/utils v0.0.0-20230212170529-bbb8a541f29a
Server and various backend components for stablecog.com
38 versions - Latest release: over 1 year ago - 2 dependent packages - 3 stars on GitHub
Top 6.2% on proxy.golang.org
github.com/Clinet/clinet v0.0.0-20221210012712-ca3b947f6574
A general purpose bot for chat services being rewritten for the modern era
3 versions - Latest release: over 1 year ago - 46 stars on GitHub
github.com/gotid/god v1.4.3
60 versions - Latest release: over 1 year ago - 1 dependent repository - 0 stars on GitHub
Top 9.0% on proxy.golang.org
github.com/darren2046/go v0.0.157
157 versions - Latest release: almost 2 years ago - 0 stars on GitHub
Top 8.3% on proxy.golang.org
github.com/imba28/spolyr v1.0.2
a private Spotify lyrics index
7 versions - Latest release: almost 2 years ago - 3 stars on GitHub
Top 8.1% on proxy.golang.org
github.com/bmpi-dev/twitter-scraper v0.0.0-20220703154639-8956ca43f394
Scrape the Twitter Frontend API without authentication with Golang.
1 version - Latest release: almost 2 years ago - 1 star on GitHub
Top 8.2% on proxy.golang.org
github.com/chaunceyshannon/golanglibs v0.0.134
Some Golang functions
134 versions - Latest release: almost 2 years ago - 0 stars on GitHub
Top 8.2% on proxy.golang.org
github.com/ChaunceyShannon/golanglibs v0.0.134
Some Golang functions
134 versions - Latest release: almost 2 years ago - 0 stars on GitHub
Top 5.4% on proxy.golang.org
github.com/mosuka/phalanx v0.4.0
Phalanx is a cloud-native distributed search engine that provides endpoints through gRPC and trad...
5 versions - Latest release: over 2 years ago - 308 stars on GitHub
Top 7.5% on proxy.golang.org
github.com/lmullen/cchc v0.1.0
America's Public Bible for Computing Cultural Heritage in the Cloud
2 versions - Latest release: over 2 years ago - 5 stars on GitHub