lucene-2.9.0 Indexing Process (Part 3): Process Overview
2009-10-30 15:00
Indexing call path
IndexWriter.addDocument(Document) line: 2428
IndexWriter.addDocument(Document, Analyzer) line: 2454
DocumentsWriter.addDocument(Document, Analyzer) line: 750
DocumentsWriter.updateDocument(Document, Analyzer, Term) line: 772
DocFieldProcessorPerThread.processDocument() line: 254
DocInverterPerField.processFields(Fieldable[], int) line: 195
TermsHashPerField.add() line: 391
Call path that triggers the flush and writes the segment files
IndexWriter.addDocument(Document) line: 2428
IndexWriter.addDocument(Document, Analyzer) line: 2475
IndexWriter.flush(boolean, boolean, boolean) line: 4166
IndexWriter.doFlush(boolean, boolean) line: 4175
IndexWriter.doFlushInternal(boolean, boolean) line: 4277
DocumentsWriter.flush(boolean) line: 581
DocFieldProcessor.flush(Collection, SegmentWriteState) line: 63
DocInverter.flush(Map, SegmentWriteState) line: 76
TermsHash.flush(Map, SegmentWriteState) line: 145
FreqProxTermsWriter.flush(Map, SegmentWriteState) line: 129
1. Writing the .frq file: docID (as a delta) + tf
FreqProxTermsWriter.flush(Map, SegmentWriteState) line: 129
FreqProxTermsWriter.appendPostings(FreqProxTermsWriterPerField[], FormatPostingsFieldsConsumer) line: 217
FormatPostingsDocsWriter.addDoc(int, int) line: 74
// The values are written roughly like this:
final int delta = docID - lastDocID;
lastDocID = docID;
if (omitTermFreqAndPositions)      // frequencies and positions are not recorded
  out.writeVInt(delta);
else if (1 == termDocFreq)
  out.writeVInt((delta << 1) | 1); // low bit set: freq == 1, no second VInt
else {
  out.writeVInt(delta << 1);       // low bit clear: freq follows
  out.writeVInt(termDocFreq);
}
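The doc-delta/low-bit trick above can be restated as a standalone method. This is a hypothetical re-creation (class and method names are mine, and it returns the VInt values rather than writing a stream), not Lucene's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the .frq encoding rule: delta-coded docIDs, with the
// low bit of the delta flagging the common "freq == 1" case so no second VInt
// is needed for it.
public class FrqEncodingSketch {
    // Returns the sequence of VInt values that would be written for one posting.
    static List<Integer> encode(int docID, int lastDocID, int termDocFreq,
                                boolean omitTermFreqAndPositions) {
        List<Integer> out = new ArrayList<>();
        int delta = docID - lastDocID;
        if (omitTermFreqAndPositions) {
            out.add(delta);                // doc delta only
        } else if (termDocFreq == 1) {
            out.add((delta << 1) | 1);     // low bit set: freq == 1 implied
        } else {
            out.add(delta << 1);           // low bit clear: freq follows
            out.add(termDocFreq);
        }
        return out;
    }
}
```

For example, going from doc 2 to doc 5 with freq 1 packs into the single value `(3 << 1) | 1 = 7`, while freq 4 produces the pair `6, 4`.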
2. Writing the .prx file: position (as a delta)
FreqProxTermsWriter.flush(Map, SegmentWriteState) line: 129
FreqProxTermsWriter.appendPostings(FreqProxTermsWriterPerField[], FormatPostingsFieldsConsumer) line: 245
FormatPostingsPositionsWriter.addPosition(int, byte[], int, int) line: 56
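Both the .frq and .prx entries are delta-coded and then written with writeVInt. VInt is Lucene's variable-byte integer format: seven payload bits per byte, with the high bit set whenever more bytes follow, so small deltas cost a single byte. A self-contained sketch (not Lucene's actual implementation):

```java
import java.io.ByteArrayOutputStream;

// Standalone sketch of the VInt encoding behind all the writeVInt calls above:
// 7 data bits per byte, high bit = "more bytes follow", least-significant first.
public class VIntSketch {
    static byte[] writeVInt(int i) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((i & ~0x7F) != 0) {        // value does not fit in 7 bits yet
            out.write((i & 0x7F) | 0x80); // emit low 7 bits with continuation flag
            i >>>= 7;
        }
        out.write(i);                     // final byte, high bit clear
        return out.toByteArray();
    }
}
```

So 127 fits in one byte, while 128 becomes the two bytes `0x80 0x01`.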
3. Writing the .tii file
// Initialization: a TermInfosWriter is created as a pair; the primary
// (isIndex == false) writes .tis, and its "other" twin (isIndex == true) writes .tii
TermInfosWriter(Directory directory, String segment, FieldInfos fis,
                int interval) throws IOException {
  initialize(directory, segment, fis, interval, false);                 // the .tis writer
  other = new TermInfosWriter(directory, segment, fis, interval, true); // the .tii writer
  other.other = this;
}
FreqProxTermsWriter.flush(Map, SegmentWriteState) line: 149
FormatPostingsFieldsWriter.finish() line: 71
TermInfosWriter.close() line: 220
// The values are written roughly like this. On the .tis side, every
// indexInterval-th term is forwarded to the .tii twin; the body below is
// writeTerm, which the twin then executes with isIndex == true:
if (!isIndex && size % indexInterval == 0) {
  // other.add(lastFieldNumber, lastTermBytes, lastTermBytesLength, lastTi); // add an index term
  output.writeVInt(start);                     // write shared prefix length
  output.writeVInt(length);                    // write delta length
  output.writeBytes(termBytes, start, length); // write delta bytes
  output.writeVInt(fieldNumber);               // write field num
  output.writeVInt(ti.docFreq);                // write doc freq
  output.writeVLong(ti.freqPointer - lastTi.freqPointer); // write pointer deltas
  output.writeVLong(ti.proxPointer - lastTi.proxPointer);
  if (ti.docFreq >= skipInterval)
    output.writeVInt(ti.skipOffset);
  if (isIndex) {
    output.writeVLong(other.output.getFilePointer() - lastIndexPointer);
    lastIndexPointer = other.output.getFilePointer(); // remember the .tis position
  }
}
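The `size % indexInterval == 0` test above means only every indexInterval-th term (128 by default in Lucene 2.9) is mirrored into .tii, which keeps the term index small enough to hold in memory while .tis stays on disk. A hypothetical sketch of which terms land in the index:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: given the sorted term list headed for .tis, pick the
// terms that would be mirrored into the .tii index for a given interval.
public class IndexIntervalSketch {
    static List<String> indexTerms(List<String> sortedTerms, int indexInterval) {
        List<String> index = new ArrayList<>();
        for (int size = 0; size < sortedTerms.size(); size++) {
            if (size % indexInterval == 0) { // mirrors the size % indexInterval test
                index.add(sortedTerms.get(size));
            }
        }
        return index;
    }
}
```

With an interval of 2, the terms a..e yield the index terms a, c, e; at lookup time a binary search over .tii narrows the scan to at most one interval of .tis.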
4. Writing the .tis file
// Initialization: the same paired constructor, seen from the .tis side
TermInfosWriter(Directory directory, String segment, FieldInfos fis,
                int interval) throws IOException {
  initialize(directory, segment, fis, interval, false);                 // the .tis writer
  other = new TermInfosWriter(directory, segment, fis, interval, true); // the .tii writer
  other.other = this;
}
FreqProxTermsWriter.appendPostings(FreqProxTermsWriterPerField[], FormatPostingsFieldsConsumer) line: 276
FormatPostingsDocsWriter.finish() line: 113
TermInfosWriter.add(int, byte[], int, TermInfo) line: 169
TermInfosWriter.add(int, byte[], int, TermInfo) line: 171
TermInfosWriter.writeTerm(int, byte[], int) line: 196
// The values are written roughly like this (writeTerm, here with isIndex == false):
output.writeVInt(start);                     // write shared prefix length
output.writeVInt(length);                    // write delta length
output.writeBytes(termBytes, start, length); // write delta bytes
output.writeVInt(fieldNumber);               // write field num
output.writeVInt(ti.docFreq);                // write doc freq
output.writeVLong(ti.freqPointer - lastTi.freqPointer); // write pointer deltas
output.writeVLong(ti.proxPointer - lastTi.proxPointer);
if (ti.docFreq >= skipInterval)
  output.writeVInt(ti.skipOffset);
if (isIndex) {
  output.writeVLong(other.output.getFilePointer() - lastIndexPointer);
  lastIndexPointer = other.output.getFilePointer();
}
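The start/length/"delta bytes" trio above is prefix compression (front coding): each term is stored as the length of the prefix it shares with the previous term plus the remaining suffix, which is cheap because the terms arrive in sorted order. A hypothetical standalone sketch of the split:

```java
// Hypothetical sketch of the shared-prefix split performed before writeTerm:
// only the suffix that differs from the previous term is written out.
public class PrefixCodingSketch {
    // Returns {sharedPrefixLength, suffix} for writing `term` after `prevTerm`.
    static Object[] frontCode(String prevTerm, String term) {
        int start = 0;
        int limit = Math.min(prevTerm.length(), term.length());
        while (start < limit && prevTerm.charAt(start) == term.charAt(start)) {
            start++; // extend the shared prefix
        }
        return new Object[]{start, term.substring(start)};
    }
}
```

For instance, writing "lucid" after "lucene" stores prefix length 3 plus the two suffix bytes "id" instead of the full five-character term.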
5. Writing the compound file (.cfs)
IndexWriter.addDocument(Document) line: 2428
IndexWriter.addDocument(Document, Analyzer) line: 2475
IndexWriter.flush(boolean, boolean, boolean) line: 4166
IndexWriter.doFlush(boolean, boolean) line: 4175
IndexWriter.doFlushInternal(boolean, boolean) line: 4321
DocumentsWriter.createCompoundFile(String) line: 614
CompoundFileWriter.close() line: 137
The following files are merged into _0.cfs:
_0.frq
_0.nrm
_0.tii
_0.tis
_0.fnm
_0.prx
// The values are written roughly like this (three loops over the files to merge):
os.writeVInt(entries.size());
for (FileEntry fe : entries) { // pass 1: write the directory (placeholder offset + file name)
  os.writeLong(0);             // for now - patched in pass 3
  os.writeString(fe.file);     // file name
}
for (FileEntry fe : entries) { // pass 2: copy each file's data
  copyFile(fe, os, buffer);
}
for (FileEntry fe : entries) { // pass 3: seek back and write the real data offsets
  os.seek(fe.directoryOffset);
  os.writeLong(fe.dataOffset);
}
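The three passes above (placeholder directory, data copy, offset patch-up) can be re-created in memory. This is a hypothetical sketch, not CompoundFileWriter itself: it uses a plain int count and writeUTF names where Lucene writes VInts and its own string format, and patches offsets in a byte array instead of seeking in a file:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.util.Map;

// Hypothetical in-memory re-creation of the compound-file layout:
// 1) entry count + directory of (placeholder offset, name) pairs,
// 2) each sub-file's bytes appended, remembering where each starts,
// 3) the real offsets patched back into the directory.
public class CompoundFileSketch {
    static byte[] write(Map<String, byte[]> files) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream os = new DataOutputStream(buf);
            os.writeInt(files.size());            // entry count (Lucene uses a VInt)
            long[] dirOffsets = new long[files.size()];
            int i = 0;
            for (String name : files.keySet()) {  // pass 1: placeholder directory
                dirOffsets[i++] = os.size();
                os.writeLong(0);                  // "for now" - patched in pass 3
                os.writeUTF(name);
            }
            long[] dataOffsets = new long[files.size()];
            i = 0;
            for (byte[] data : files.values()) {  // pass 2: copy the data
                dataOffsets[i++] = os.size();
                os.write(data);
            }
            byte[] out = buf.toByteArray();
            ByteBuffer bb = ByteBuffer.wrap(out);
            for (i = 0; i < dirOffsets.length; i++) { // pass 3: patch real offsets
                bb.putLong((int) dirOffsets[i], dataOffsets[i]);
            }
            return out;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The placeholder-then-patch design is what lets the writer stream each sub-file once without knowing its final offset in advance.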