## Examples of org.apache.lucene.search.Similarity

• See also: Introduction To Information Retrieval, Chapter 6 — http://nlp.stanford.edu/IR-book/html/htmledition/queries-as-vectors-1.html

The following describes how Lucene scoring evolves from the underlying information retrieval models to an (efficient) implementation. We first summarize the VSM score, then derive from it Lucene's Conceptual Scoring Formula, from which, finally, evolves Lucene's Practical Scoring Function (the latter connects directly to Lucene classes and methods).

Lucene combines the Boolean model (BM) of Information Retrieval with the Vector Space Model (VSM) of Information Retrieval: documents "approved" by BM are scored by VSM.

In VSM, documents and queries are represented as weighted vectors in a multi-dimensional space, where each distinct index term is a dimension, and weights are Tf-idf values.

VSM does not require weights to be Tf-idf values, but Tf-idf values are believed to produce search results of high quality, and so Lucene uses Tf-idf. Tf and Idf are described in more detail below, but for now, for completeness, let's just say that for a given term t and document (or query) x, tf(t,x) varies with the number of occurrences of term t in x (when one increases so does the other) and idf(t) similarly varies with the inverse of the number of index documents containing term t.
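As a rough, self-contained sketch (not Lucene's actual code; the class and method names are hypothetical), such a Tf-idf weight could look like this, using the sublinear tf and smoothed idf described later in this document:

```java
// Hypothetical tf-idf weight for a term, illustration only:
// tf grows with the term's occurrences in x; idf grows as fewer documents contain t.
public final class TfIdfSketch {
  public static float weight(int termFreq, int numDocs, int docFreq) {
    float tf = (float) Math.sqrt(termFreq);
    float idf = (float) (1 + Math.log(numDocs / (double) (docFreq + 1)));
    return tf * idf;
  }
}
```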

VSM score of document d for query q is the Cosine Similarity of the weighted query vectors V(q) and V(d):

cosine-similarity(q,d) = ( V(q) · V(d) ) / ( |V(q)| · |V(d)| )

VSM Score

Where V(q) · V(d) is the dot product of the weighted vectors, and |V(q)| and |V(d)| are their Euclidean norms.

Note: the above equation can be viewed as the dot product of the normalized weighted vectors, in the sense that dividing V(q) by its Euclidean norm normalizes it to a unit vector.
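For concreteness, here is a minimal sketch (not Lucene code; names are made up) of the cosine similarity of two sparse weighted vectors, represented as maps from term to weight:

```java
import java.util.Map;

// Cosine similarity of two sparse term-weight vectors, illustration only.
public final class CosineSketch {
  public static float similarity(Map<String, Float> vq, Map<String, Float> vd) {
    float dot = 0f, normQ = 0f, normD = 0f;
    for (Map.Entry<String, Float> e : vq.entrySet()) {
      Float w = vd.get(e.getKey());
      if (w != null) dot += e.getValue() * w;       // V(q) · V(d)
      normQ += e.getValue() * e.getValue();         // |V(q)|^2
    }
    for (float w : vd.values()) normD += w * w;     // |V(d)|^2
    return (float) (dot / (Math.sqrt(normQ) * Math.sqrt(normD)));
  }
}
```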

Lucene refines VSM score for both search quality and usability:

• Normalizing V(d) to the unit vector is known to be problematic in that it removes all document length information. For some documents removing this info is probably ok, e.g. a document made by duplicating a certain paragraph 10 times, especially if that paragraph is made of distinct terms. But for a document which contains no duplicated paragraphs, this might be wrong. To avoid this problem, a different document length normalization factor is used, which normalizes to a vector equal to or larger than the unit vector: doc-len-norm(d).
• At indexing, users can specify that certain documents are more important than others, by assigning a document boost. For this, the score of each document is also multiplied by its boost value doc-boost(d).
• Lucene is field based, hence each query term applies to a single field, document length normalization is by the length of that field, and in addition to document boost there are also document field boosts.
• The same field can be added to a document during indexing several times, and so the boost of that field is the multiplication of the boosts of the separate additions (or parts) of that field within the document.
• At search time users can specify boosts to each query, sub-query, and each query term, hence the contribution of a query term to the score of a document is multiplied by the boost of that query term query-boost(q).
• A document may match a multi-term query without containing all the terms of that query (this holds for some query types), and users can further reward documents matching more query terms through a coordination factor, which is usually larger when more terms are matched: coord-factor(q,d).

Under the simplifying assumption of a single field in the index, we get Lucene's Conceptual scoring formula:

score(q,d) = coord-factor(q,d) · query-boost(q) · ( V(q) · V(d) / |V(q)| ) · doc-len-norm(d) · doc-boost(d)

Lucene Conceptual Scoring Formula

The conceptual formula is a simplification in the sense that (1) terms and documents are fielded and (2) boosts are usually per query term rather than per query.

We now describe how Lucene implements this conceptual scoring formula, and derive from it Lucene's Practical Scoring Function.

For efficient score computation some scoring components are computed and aggregated in advance:

• Query-boost for the query (actually for each query term) is known when search starts.
• Query Euclidean norm |V(q)| can be computed when search starts, as it is independent of the document being scored. From a search-optimization perspective it is fair to ask why normalize the query at all: all scored documents are multiplied by the same |V(q)|, so document ranks (their order by score) are not affected by this normalization. There are two good reasons to keep it:
• Recall that Cosine Similarity can be used to find how similar two documents are. One can use Lucene for, e.g., clustering, and use a document as a query to compute its similarity to other documents. In this use case it is important that the score of document d3 for query d1 be comparable to the score of document d3 for query d2. In other words, scores of a document for two distinct queries should be comparable. There are other applications that may require this. And this is exactly what normalizing the query vector V(q) provides: comparability (to a certain extent) of two or more queries.
• Applying query normalization to the scores helps keep them near one, preventing loss of score data due to floating-point precision limitations.
• Document length norm doc-len-norm(d) and document boost doc-boost(d) are known at indexing time. They are computed in advance and their multiplication is saved as a single value in the index: norm(d). (In the equations below, norm(t in d) means norm(field(t) in doc d) where field(t) is the field associated with term t.)

Lucene's Practical Scoring Function is derived from the above. Its factors correspond directly to those of the conceptual formula:

score(q,d) = coord(q,d) · queryNorm(q) · ∑ t in q ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )

Lucene Practical Scoring Function

where

1. tf(t in d) correlates to the term's frequency, defined as the number of times term t appears in the currently scored document d. Documents that have more occurrences of a given term receive a higher score. Note that tf(t in q) is assumed to be 1 and therefore does not appear in this equation. However, if a query contains the same term twice, there will be two term-queries with that same term, and the computation would still be correct (although not very efficient). The default computation for tf(t in d) in {@link org.apache.lucene.search.DefaultSimilarity#tf(float) DefaultSimilarity} is:

{@link org.apache.lucene.search.DefaultSimilarity#tf(float) tf(t in d)} = √frequency
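Applications can change this curve by overriding tf; a minimal sketch (the logarithmic dampening here is an arbitrary example choice, not a recommendation):

```java
Similarity flatterTf = new DefaultSimilarity() {
  @Override
  public float tf(float freq) {
    // Dampen term frequency more aggressively than the default sqrt:
    return freq > 0 ? 1f + (float) Math.log(freq) : 0f;
  }
};
```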

2. idf(t) stands for Inverse Document Frequency. This value correlates to the inverse of docFreq (the number of documents in which the term t appears). This means rarer terms give higher contribution to the total score. idf(t) appears for t in both the query and the document, hence it is squared in the equation. The default computation for idf(t) in {@link org.apache.lucene.search.DefaultSimilarity#idf(int,int) DefaultSimilarity} is:

{@link org.apache.lucene.search.DefaultSimilarity#idf(int,int) idf(t)} = 1 + log( numDocs / (docFreq + 1) )
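As a worked example with hypothetical numbers: in an index of numDocs = 1000 where term t appears in docFreq = 9 documents, idf(t) = 1 + ln(1000/10) ≈ 5.61 (the default uses the natural logarithm), whereas a term appearing in 999 documents gets idf(t) = 1 + ln(1000/1000) = 1.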

3. coord(q,d) is a score factor based on how many of the query terms are found in the specified document. Typically, a document that contains more of the query's terms will receive a higher score than another document with fewer query terms. This is a search time factor computed in {@link #coord(int,int) coord(q,d)} by the Similarity in effect at search time.
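For reference, {@link org.apache.lucene.search.DefaultSimilarity} computes this factor as the fraction of query terms matched:

```java
Similarity sim = new DefaultSimilarity();
// 2 of 3 query terms found in the document:
float c = sim.coord(2, 3);   // overlap / maxOverlap = 0.6667
```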

4. queryNorm(q) is a normalizing factor used to make scores between queries comparable. This factor does not affect document ranking (since all ranked documents are multiplied by the same factor), but rather just attempts to make scores from different queries (or even different indexes) comparable. This is a search time factor computed by the Similarity in effect at search time. The default computation in {@link org.apache.lucene.search.DefaultSimilarity#queryNorm(float) DefaultSimilarity} produces a Euclidean norm:

queryNorm(q) = {@link org.apache.lucene.search.DefaultSimilarity#queryNorm(float) queryNorm(sumOfSquaredWeights)} = 1 / √sumOfSquaredWeights

The sum of squared weights (of the query terms) is computed by the query {@link org.apache.lucene.search.Weight} object. For example, a {@link org.apache.lucene.search.BooleanQuery boolean query} computes this value as:

{@link org.apache.lucene.search.Weight#sumOfSquaredWeights() sumOfSquaredWeights} = {@link org.apache.lucene.search.Query#getBoost() q.getBoost()}² · ∑ t in q ( idf(t) · t.getBoost() )²
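As a worked example with hypothetical numbers: for a two-term boolean query with q.getBoost() = 1, idf(t1) = 2, idf(t2) = 3, and no term boosts, sumOfSquaredWeights = 1² · (2² + 3²) = 13, so queryNorm(q) = 1/√13 ≈ 0.277.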

5. t.getBoost() is a search time boost of term t in the query q, as specified in the query text (see query syntax) or set by application calls to {@link org.apache.lucene.search.Query#setBoost(float) setBoost()}. Notice that there is really no direct API for accessing the boost of one term in a multi-term query; rather, multiple terms are represented in a query as multiple {@link org.apache.lucene.search.TermQuery TermQuery} objects, and so the boost of a term in the query is accessible by calling the sub-query's {@link org.apache.lucene.search.Query#getBoost() getBoost()}.
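A minimal sketch of setting a per-term boost this way (the field and term values are made up):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

// Each term is its own TermQuery sub-query; boost the one that matters more:
TermQuery important = new TermQuery(new Term("title", "lucene"));
important.setBoost(4.0f);                      // this is t.getBoost() for "lucene"
TermQuery ordinary = new TermQuery(new Term("title", "scoring"));

BooleanQuery query = new BooleanQuery();
query.add(important, BooleanClause.Occur.SHOULD);
query.add(ordinary, BooleanClause.Occur.SHOULD);
```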

6. norm(t,d) encapsulates a few (indexing time) boost and length factors:
• Document boost - set by calling {@link org.apache.lucene.document.Document#setBoost(float) doc.setBoost()} before adding the document to the index.
• Field boost - set by calling {@link org.apache.lucene.document.Fieldable#setBoost(float) field.setBoost()} before adding the field to a document.
• {@link #lengthNorm(String,int) lengthNorm(field)} - computed when the document is added to the index, in accordance with the number of tokens of this field in the document, so that shorter fields contribute more to the score. LengthNorm is computed by the Similarity class in effect at indexing.

When a document is added to the index, all the above factors are multiplied. If the document has multiple fields with the same name, all their boosts are multiplied together:

norm(t,d) = {@link org.apache.lucene.document.Document#getBoost() doc.getBoost()} · {@link #lengthNorm(String,int) lengthNorm(field)} · ∏ f in d named as t {@link org.apache.lucene.document.Fieldable#getBoost() f.getBoost()}

However, the resulting norm value is {@link #encodeNorm(float) encoded} as a single byte before being stored. At search time, the norm byte value is read from the index {@link org.apache.lucene.store.Directory directory} and {@link #decodeNorm(byte) decoded} back to a float norm value. This encoding/decoding, while reducing index size, comes at the price of precision loss - it is not guaranteed that decode(encode(x)) = x. For instance, decode(encode(0.89)) = 0.75.
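The precision loss can be observed directly with the encode/decode pair mentioned above (a minimal sketch):

```java
float norm = 0.89f;
byte oneByte = Similarity.encodeNorm(norm);    // what gets stored in the index
float roundTripped = Similarity.decodeNorm(oneByte);
System.out.println(roundTripped);              // prints 0.75, not 0.89
```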

Compression of norm values to a single byte saves memory at search time, because once a field is referenced at search time, its norms - for all documents - are maintained in memory.

The rationale supporting such lossy compression of norm values is that, given the difficulty (and inaccuracy) users have in expressing their true information need in a query, only big differences matter.

Last, note that search time is too late to modify this norm part of scoring, e.g. by using a different {@link Similarity} for search.
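In practice this means a custom Similarity must be installed before indexing, and the same implementation should be used again at search time; a minimal sketch (MySimilarity, dir, and analyzer are placeholders):

```java
Similarity custom = new MySimilarity();        // hypothetical custom subclass

IndexWriter writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.LIMITED);
writer.setSimilarity(custom);                  // affects norm(t,d) written to the index
// ... add documents, then writer.close() ...

IndexSearcher searcher = new IndexSearcher(dir);
searcher.setSimilarity(custom);                // affects the search-time factors
```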

@see #setDefault(Similarity)
@see org.apache.lucene.index.IndexWriter#setSimilarity(Similarity)
@see Searcher#setSimilarity(Similarity)

```java
// Computing and caching field norms in an IndexReader implementation
// (from Lucene's MemoryIndex); the norm comes from the Similarity in effect.
private Similarity cachedSimilarity;

@Override
public byte[] norms(String fieldName) {
  byte[] norms = cachedNorms;
  Similarity sim = getSimilarity();
  if (fieldName != cachedFieldName || sim != cachedSimilarity) { // not cached?
    Info info = getInfo(fieldName);
    int numTokens = info != null ? info.numTokens : 0;
    int numOverlapTokens = info != null ? info.numOverlapTokens : 0;
    float boost = info != null ? info.getBoost() : 1.0f;
    FieldInvertState invertState = new FieldInvertState(0, numTokens, numOverlapTokens, 0, boost);
    float n = sim.computeNorm(fieldName, invertState);
    byte norm = Similarity.encodeNorm(n);
    norms = new byte[] {norm};

    // cache it for future reuse
    cachedNorms = norms;
    cachedFieldName = fieldName;
    cachedSimilarity = sim;
  }
  return norms;
}
```

```java
public void testSpanScorerZeroSloppyFreq() throws Exception {
  boolean ordered = true;
  int slop = 1;
  // A Similarity whose sloppy phrase frequency is always zero:
  final Similarity sim = new DefaultSimilarity() {
    @Override
    public float sloppyFreq(int distance) {
      return 0.0f;
    }
  };
  // ... (snippet truncated in source)
```

```java
public void testSweetSpotLengthNorm() {
  SweetSpotSimilarity ss = new SweetSpotSimilarity();
  ss.setLengthNormFactors(1, 1, 0.5f);
  Similarity d = new DefaultSimilarity();
  Similarity s = ss;

  // base case, should degrade
  for (int i = 1; i < 1000; i++) {
    assertEquals("base case: i=" + i,
                 d.lengthNorm("foo", i), s.lengthNorm("foo", i),
                 0.0f);
  }

  // make a sweet spot
  ss.setLengthNormFactors(3, 10, 0.5f);
  for (int i = 3; i <= 10; i++) {
    assertEquals("3,10: spot i=" + i,
                 1.0f, s.lengthNorm("foo", i),
                 0.0f);
  }
  for (int i = 10; i < 1000; i++) {
    // ... (assertion truncated in source)
  }
}
```

```java
public void testSweetSpotTf() {
  SweetSpotSimilarity ss = new SweetSpotSimilarity();
  Similarity d = new DefaultSimilarity();
  Similarity s = ss;

  // tf equal
  ss.setBaselineTfFactors(0.0f, 0.0f);
  for (int i = 1; i < 1000; i++) {
    assertEquals("tf: i=" + i,
                 d.tf(i), s.tf(i), 0.0f);
  }

  // tf higher
  ss.setBaselineTfFactors(1.0f, 0.0f);
  for (int i = 1; i < 1000; i++) {
    assertTrue("tf: i=" + i + " : d=" + d.tf(i) +
               " < s=" + s.tf(i),
               d.tf(i) < s.tf(i));
  }

  // tf flat
  ss.setBaselineTfFactors(1.0f, 6.0f);
  for (int i = 1; i <= 6; i++) {
    assertEquals("tf flat1: i=" + i, 1.0f, s.tf(i), 0.0f);
  }
  ss.setBaselineTfFactors(2.0f, 6.0f);
  for (int i = 1; i <= 6; i++) {
    assertEquals("tf flat2: i=" + i, 2.0f, s.tf(i), 0.0f);
  }
  for (int i = 6; i <= 1000; i++) {
    assertTrue("tf: i=" + i + " : s=" + s.tf(i) +
               " < d=" + d.tf(i),
               s.tf(i) < d.tf(i));
  }

  // stupidity
  assertEquals("tf zero", 0.0f, s.tf(0), 0.0f);
}
```

```java
      // ... (snippet starts mid-method in source)
      return hyperbolicTf(freq);
    }
  };
ss.setHyperbolicTfFactors(3.3f, 7.7f, Math.E, 5.0f);

Similarity s = ss;
for (int i = 1; i <= 1000; i++) {
  assertTrue("MIN tf: i=" + i + " : s=" + s.tf(i),
             3.3f <= s.tf(i));
  assertTrue("MAX tf: i=" + i + " : s=" + s.tf(i),
             s.tf(i) <= 7.7f);
}
assertEquals("MID tf", 3.3f + (7.7f - 3.3f) / 2.0f, s.tf(5), 0.00001f);

// stupidity
assertEquals("tf zero", 0.0f, s.tf(0), 0.0f);
```

```java
    // ... (snippet starts mid-method in source)
    //System.out.println(msg);
    lastScore = scores[i];
  }

  // override the norms to be inverted
  Similarity s = new DefaultSimilarity() {
    @Override
    public float lengthNorm(String fieldName, int numTokens) {
      return numTokens;
    }
  };
```

```java
//System.err.println("total hits: " + results.totalHits);

// set similarity to use only the frequencies:
// score is based on frequency of phrase only
searcher.setSimilarity(
    new Similarity() {
      public static final long serialVersionUID = 1L;

      public float coord(int overlap, int maxOverlap) {
        return 1;
      }

      public float queryNorm(float sumOfSquaredWeights) {
        // ... (snippet truncated in source)
```

```java
private IndexSearcher buildSearcher(SearchFactoryImplementor searchFactoryImplementor) {
  Map<Class, DocumentBuilder> builders = searchFactoryImplementor.getDocumentBuilders();
  List<DirectoryProvider> directories = new ArrayList<DirectoryProvider>();
  Set<String> idFieldNames = new HashSet<String>();
  Similarity searcherSimilarity = null;
  //TODO check if caching this work for the last n list of classes makes a perf boost
  if ( classes == null || classes.length == 0 ) {
    // empty classes array means search over all indexed entities,
    // but we have to make sure there is at least one
    if ( builders.isEmpty() ) {
      // ... (snippet truncated in source)
```

```java
public void performWork(LuceneWork work, IndexWriter writer) {
  DocumentBuilder documentBuilder = workspace.getDocumentBuilder( work.getEntityClass() );
  Analyzer analyzer = documentBuilder.getAnalyzer();
  Similarity similarity = documentBuilder.getSimilarity();
  if ( log.isTraceEnabled() ) {
    log.trace(
        "add to Lucene index: {}#{}:{}",
        new Object[] { work.getEntityClass(), work.getId(), work.getDocument() }
    );
    // ... (snippet truncated in source)
```

```java
// A later revision of the same MemoryIndex code; note the non-static
// sim.encodeNormValue(n) replacing Similarity.encodeNorm(n) above.
private Similarity cachedSimilarity;

@Override
public byte[] norms(String fieldName) {
  byte[] norms = cachedNorms;
  Similarity sim = getSimilarity();
  if (fieldName != cachedFieldName || sim != cachedSimilarity) { // not cached?
    Info info = getInfo(fieldName);
    int numTokens = info != null ? info.numTokens : 0;
    int numOverlapTokens = info != null ? info.numOverlapTokens : 0;
    float boost = info != null ? info.getBoost() : 1.0f;
    FieldInvertState invertState = new FieldInvertState(0, numTokens, numOverlapTokens, 0, boost);
    float n = sim.computeNorm(fieldName, invertState);
    byte norm = sim.encodeNormValue(n);
    norms = new byte[] {norm};

    // cache it for future reuse
    cachedNorms = norms;
    cachedFieldName = fieldName;
    cachedSimilarity = sim;
  }
  return norms;
}
```