Package org.apache.lucene.codecs.lucene42

Examples of org.apache.lucene.codecs.lucene42.Lucene42TermVectorsFormat

Term vectors are stored using two files: a vector data file (.tvd) and an index file (.tvx).

Looking up term vectors for any document requires at most 1 disk seek.

File formats

  1. A vector data file (extension .tvd). This file stores terms, frequencies, positions, offsets and payloads for every document. When writing a new segment, data is accumulated in memory until the buffer used to store terms and payloads grows beyond 4KB; all metadata, terms and positions are then flushed to disk, using LZ4 compression for terms and payloads and {@link BlockPackedWriter blocks of packed ints} for positions.

    Here is a more detailed description of the vector data file format:

    • VectorData (.tvd) --> <Header>, PackedIntsVersion, ChunkSize, <Chunk>^ChunkCount
    • Header --> {@link CodecUtil#writeHeader CodecHeader}
    • PackedIntsVersion --> {@link PackedInts#VERSION_CURRENT} as a {@link DataOutput#writeVInt VInt}
    • ChunkSize is the number of bytes of terms to accumulate before flushing, as a {@link DataOutput#writeVInt VInt}
    • ChunkCount is not known in advance and is the number of chunks necessary to store all documents of the segment
    • Chunk --> DocBase, ChunkDocs, <NumFields>, <FieldNums>, <FieldNumOffs>, <Flags>, <NumTerms>, <TermLengths>, <TermFreqs>, <Positions>, <StartOffsets>, <Lengths>, <PayloadLengths>, <TermAndPayloads>
    • DocBase is the ID of the first doc of the chunk as a {@link DataOutput#writeVInt VInt}
    • ChunkDocs is the number of documents in the chunk
    • NumFields --> DocNumFields^ChunkDocs
    • DocNumFields is the number of fields for each doc, written as a {@link DataOutput#writeVInt VInt} if ChunkDocs==1 and as a {@link PackedInts} array otherwise
    • FieldNums --> FieldNumDelta^TotalDistinctFields, a delta-encoded list of the sorted unique field numbers present in the chunk
    • FieldNumOffs --> FieldNumOff^TotalFields, as a {@link PackedInts} array
    • FieldNumOff is the offset of the field number in FieldNums
    • TotalFields is the total number of fields (sum of the values of NumFields)
    • Flags --> Bit <FieldFlags>
    • Bit is a single bit which when true means that fields have the same options for every document in the chunk
    • FieldFlags --> if Bit==1: Flag^TotalDistinctFields else Flag^TotalFields
    • Flag: a 3-bit int, where:
      • the first bit means that the field has positions
      • the second bit means that the field has offsets
      • the third bit means that the field has payloads
    • NumTerms --> FieldNumTerms^TotalFields
    • FieldNumTerms: the number of terms for each field, using {@link BlockPackedWriter blocks of 64 packed ints}
    • TermLengths --> PrefixLength^TotalTerms, SuffixLength^TotalTerms
    • TotalTerms: total number of terms (sum of NumTerms)
    • PrefixLength: 0 for the first term of a field, otherwise the length of the common prefix with the previous term, using {@link BlockPackedWriter blocks of 64 packed ints}
    • SuffixLength: length of the term minus PrefixLength for every term, using {@link BlockPackedWriter blocks of 64 packed ints}
    • TermFreqs --> TermFreqMinus1^TotalTerms
    • TermFreqMinus1: (frequency - 1) for each term, using {@link BlockPackedWriter blocks of 64 packed ints}
    • Positions --> PositionDelta^TotalPositions
    • TotalPositions is the sum of frequencies of terms of all fields that have positions
    • PositionDelta: the absolute position for the first position of a term, and the difference with the previous position for subsequent positions, using {@link BlockPackedWriter blocks of 64 packed ints}
    • StartOffsets --> (AvgCharsPerTerm^TotalDistinctFields) StartOffsetDelta^TotalOffsets
    • TotalOffsets is the sum of frequencies of terms of all fields that have offsets
    • AvgCharsPerTerm: average number of chars per term, encoded as a float on 4 bytes. Not present if no field has both positions and offsets enabled.
    • StartOffsetDelta: (startOffset - previousStartOffset - AvgCharsPerTerm * PositionDelta); previousStartOffset is 0 for the first offset, and AvgCharsPerTerm is 0 if the field has no positions; using {@link BlockPackedWriter blocks of 64 packed ints}
    • Lengths --> LengthMinusTermLength^TotalOffsets
    • LengthMinusTermLength: (endOffset - startOffset - termLength), using {@link BlockPackedWriter blocks of 64 packed ints}
    • PayloadLengths --> PayloadLength^TotalPayloads
    • TotalPayloads is the sum of frequencies of terms of all fields that have payloads
    • PayloadLength is the payload length, encoded using {@link BlockPackedWriter blocks of 64 packed ints}
    • TermAndPayloads --> LZ4-compressed representation of <FieldTermsAndPayLoads>^TotalFields
    • FieldTermsAndPayLoads --> Terms (Payloads)
    • Terms: term bytes
    • Payloads: payload bytes (if the field has payloads)
  2. An index file (extension .tvx), used to locate the start of document chunks in the vector data file.

    • VectorIndex (.tvx) --> <Header>, <ChunkIndex>
    • Header --> {@link CodecUtil#writeHeader CodecHeader}
    • ChunkIndex: See {@link CompressingStoredFieldsIndexWriter}
@lucene.experimental
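
The PrefixLength/SuffixLength rules above amount to front-coding the sorted term list of a field: each term records only how many leading chars it shares with the previous term, plus its remaining suffix. Here is a minimal plain-Java sketch of that idea (class and method names are illustrative; Lucene actually stores the lengths with BlockPackedWriter and the suffix bytes inside the LZ4-compressed TermAndPayloads section):

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative front-coding of a sorted term list, as described above. */
public class TermFrontCoding {

  static final class Entry {
    final int prefixLength; // chars shared with the previous term
    final String suffix;    // remaining chars of this term
    Entry(int prefixLength, String suffix) {
      this.prefixLength = prefixLength;
      this.suffix = suffix;
    }
  }

  /** Encodes terms as (PrefixLength, suffix) pairs; 0 for the first term. */
  static List<Entry> encode(List<String> terms) {
    List<Entry> out = new ArrayList<>();
    String prev = null;
    for (String term : terms) {
      int p = 0;
      if (prev != null) {
        int max = Math.min(prev.length(), term.length());
        while (p < max && prev.charAt(p) == term.charAt(p)) p++;
      }
      out.add(new Entry(p, term.substring(p)));
      prev = term;
    }
    return out;
  }

  /** Rebuilds the original terms from the encoded form. */
  static List<String> decode(List<Entry> entries) {
    List<String> out = new ArrayList<>();
    String prev = "";
    for (Entry e : entries) {
      prev = prev.substring(0, e.prefixLength) + e.suffix;
      out.add(prev);
    }
    return out;
  }
}
```

Decoding only ever needs the previous term, so the term list of a field can be reconstructed in a single sequential pass over the chunk.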
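
The PositionDelta and StartOffsetDelta rules above are plain integer arithmetic, so they can be illustrated without any Lucene code. A hedged sketch (names are made up; Lucene additionally packs the resulting values into blocks of 64 packed ints):

```java
/** Illustrative arithmetic for PositionDelta and StartOffsetDelta, as described above. */
public class VectorDeltas {

  /** The first position of a term is stored as-is; later ones as differences. */
  static int[] positionDeltas(int[] positions) {
    int[] out = new int[positions.length];
    for (int i = 0; i < positions.length; i++) {
      out[i] = (i == 0) ? positions[i] : positions[i] - positions[i - 1];
    }
    return out;
  }

  /**
   * StartOffsetDelta = startOffset - previousStartOffset
   *                  - AvgCharsPerTerm * PositionDelta.
   * previousStartOffset is 0 for the first offset, and avgCharsPerTerm
   * is 0 when the field has no positions.
   */
  static int[] startOffsetDeltas(int[] startOffsets, int[] positionDeltas,
                                 float avgCharsPerTerm) {
    int[] out = new int[startOffsets.length];
    int prev = 0;
    for (int i = 0; i < startOffsets.length; i++) {
      out[i] = startOffsets[i] - prev - (int) (avgCharsPerTerm * positionDeltas[i]);
      prev = startOffsets[i];
    }
    return out;
  }
}
```

When offsets grow by roughly AvgCharsPerTerm per position, the residuals stay close to zero, which is what keeps the packed-ints encoding compact.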

        dvFormat = DocValuesFormat.forName(formats[random.nextInt(formats.length)]);
      } else {
        dvFormat = DocValuesFormat.forName(TEST_DOCVALUESFORMAT);
      }
     
      codec = new Lucene42Codec() {      
        @Override
        public PostingsFormat getPostingsFormatForField(String field) {
          return format;
        }


    Directory directory = newDirectory();
    // we don't use RandomIndexWriter because it might add more docvalues than we expect!
    IndexWriterConfig iwc = newIndexWriterConfig(TEST_VERSION_CURRENT, analyzer);
    final DocValuesFormat fast = DocValuesFormat.forName("Lucene42");
    final DocValuesFormat slow = DocValuesFormat.forName("SimpleText");
    iwc.setCodec(new Lucene42Codec() {
      @Override
      public DocValuesFormat getDocValuesFormatForField(String field) {
        if ("dv1".equals(field)) {
          return fast;
        } else {


    // (and maybe their params, too) to infostream on flush and merge.
    // otherwise in a real debugging situation we won't know what's going on!
    if (LuceneTestCase.VERBOSE) {
      System.out.println("forcing postings format to:" + format);
    }
    return new Lucene42Codec() {
      @Override
      public PostingsFormat getPostingsFormatForField(String field) {
        return format;
      }
    };

    // (and maybe their params, too) to infostream on flush and merge.
    // otherwise in a real debugging situation we won't know what's going on!
    if (LuceneTestCase.VERBOSE) {
      System.out.println("forcing docvalues format to:" + format);
    }
    return new Lucene42Codec() {
      @Override
      public DocValuesFormat getDocValuesFormatForField(String field) {
        return format;
      }
    };

 
  public void testWriteReadMerge() throws IOException {
    // get another codec, other than the default: so we are merging segments across different codecs
    final Codec otherCodec;
    if ("SimpleText".equals(Codec.getDefault().getName())) {
      otherCodec = new Lucene42Codec();
    } else {
      otherCodec = new SimpleTextCodec();
    }
    Directory dir = newDirectory();
    IndexWriterConfig iwConf = newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random()));

  /** Override this to customize index settings, e.g. which
   *  codec to use. */
  protected IndexWriterConfig getIndexWriterConfig(Version matchVersion, Analyzer indexAnalyzer) {
    IndexWriterConfig iwc = new IndexWriterConfig(matchVersion, indexAnalyzer);
    iwc.setCodec(new Lucene42Codec());
    iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
    return iwc;
  }

    }
    dir.close();
  }
 
  public void testSameCodecDifferentInstance() throws Exception {
    Codec codec = new Lucene42Codec() {
      @Override
      public PostingsFormat getPostingsFormatForField(String field) {
        if ("id".equals(field)) {
          return new Pulsing41PostingsFormat(1);
        } else if ("date".equals(field)) {

    };
    doTestMixedPostings(codec);
  }
 
  public void testSameCodecDifferentParams() throws Exception {
    Codec codec = new Lucene42Codec() {
      @Override
      public PostingsFormat getPostingsFormatForField(String field) {
        if ("id".equals(field)) {
          return new Pulsing41PostingsFormat(1);
        } else if ("date".equals(field)) {

    dir.close();
  }
 
  private static final class UnRegisteredCodec extends FilterCodec {
    public UnRegisteredCodec() {
      super("NotRegistered", new Lucene42Codec());
    }
