Class Engine
- All Implemented Interfaces:
Closeable, AutoCloseable
- Direct Known Subclasses:
InternalEngine, ReadOnlyEngine
-
Nested Class Summary
Nested Classes
static class Engine.CommitId
static class Engine.Delete
static class Engine.DeleteResult
static interface Engine.EventListener
static class Engine.Get
static class Engine.GetResult
static enum Engine.HistorySource - Whether we should read history operations from translog or Lucene index
static class Engine.Index
static class Engine.IndexCommitRef
static class Engine.IndexResult
protected static class Engine.IndexThrottle - A throttling class that can be activated, causing the acquireThrottle method to block on a lock when throttling is enabled
static class Engine.NoOp
protected static class Engine.NoOpLock - A Lock implementation that always allows the lock to be acquired
static class Engine.NoOpResult
static class Engine.Operation
static class Engine.Result - Base class for index and delete operation results; holds result meta data
static class Engine.Searcher
static enum Engine.SearcherScope
static class Engine.SearcherSupplier
static enum Engine.SyncedFlushResult
static interface Engine.TranslogRecoveryRunner
static interface Engine.Warmer - Called for each new opened engine reader to warm new segments
-
Field Summary
Fields
static String CAN_MATCH_SEARCH_SOURCE
protected static String DOC_STATS_SOURCE
protected EngineConfig engineConfig
protected Engine.EventListener eventListener
protected org.apache.lucene.util.SetOnce<Exception> failedEngine
protected ReentrantLock failEngineLock
static String FORCE_MERGE_UUID_KEY
static String HISTORY_UUID_KEY
protected AtomicBoolean isClosed
protected long lastWriteNanos
protected org.apache.logging.log4j.Logger logger
static String MAX_UNSAFE_AUTO_ID_TIMESTAMP_COMMIT_ID
static String MIN_RETAINED_SEQNO
protected ReleasableLock readLock
protected ReentrantReadWriteLock rwl
static String SEARCH_SOURCE
protected ShardId shardId
protected Store store
static String SYNC_COMMIT_ID
protected ReleasableLock writeLock
-
Constructor Summary
Constructors -
Method Summary
Methods
abstract Closeable acquireHistoryRetentionLock(Engine.HistorySource historySource)
acquireIndexCommitForSnapshot()
abstract Engine.IndexCommitRef acquireLastIndexCommit(boolean flushFirst)
abstract Engine.IndexCommitRef acquireSafeIndexCommit()
Engine.Searcher acquireSearcher(String source)
Engine.Searcher acquireSearcher(String source, Engine.SearcherScope scope)
Engine.Searcher acquireSearcher(String source, Engine.SearcherScope scope, Function<Engine.Searcher,Engine.Searcher> wrapper)
Engine.SearcherSupplier acquireSearcherSupplier(Function<Engine.Searcher,Engine.Searcher> wrapper)
Engine.SearcherSupplier acquireSearcherSupplier(Function<Engine.Searcher,Engine.Searcher> wrapper, Engine.SearcherScope scope)
abstract void activateThrottling()
abstract void advanceMaxSeqNoOfUpdatesOrDeletes(long maxSeqNoOfUpdatesOnPrimary)
void close()
protected abstract void closeNoLock(String reason, CountDownLatch closedLatch)
commitStats()
abstract CompletionStats completionStats(String... fieldNamePatterns)
config()
abstract void deactivateThrottling()
abstract Engine.DeleteResult delete(Engine.Delete delete)
protected DocsStats docsStats(org.apache.lucene.index.IndexReader indexReader)
docStats()
protected void ensureOpen()
protected void ensureOpen(Exception suppressed)
abstract boolean ensureTranslogSynced(Stream<Translog.Location> locations)
abstract int estimateNumberOfHistoryOperations(String reason, Engine.HistorySource historySource, MapperService mapperService, long startingSeqNo)
void failEngine(String reason, Exception failure)
protected void fillSegmentStats(org.apache.lucene.index.SegmentReader segmentReader, boolean includeSegmentFileSizes, SegmentsStats stats)
abstract int fillSeqNoGaps(long primaryTerm)
flush()
abstract Engine.CommitId flush(boolean force, boolean waitIfOngoing)
void flushAndClose()
abstract void forceMerge(boolean flush, int maxNumSegments, boolean onlyExpungeDeletes, boolean upgrade, boolean upgradeOnlyAncientSegments, String forceMergeUUID)
abstract Engine.GetResult get(Engine.Get get, MappingLookup mappingLookup, DocumentParser documentParser, Function<Engine.Searcher,Engine.Searcher> searcherWrapper)
protected Engine.GetResult getFromSearcher(Engine.Get get, Engine.Searcher searcher)
abstract String getHistoryUUID()
abstract long getIndexBufferRAMBytesUsed()
abstract long getIndexThrottleTimeInMillis()
protected abstract org.apache.lucene.index.SegmentInfos getLastCommittedSegmentInfos()
abstract long getLastSyncedGlobalCheckpoint()
long getLastWriteNanos()
long getMaxSeenAutoIdTimestamp()
abstract long getMaxSeqNoOfUpdatesOrDeletes()
abstract long getMinRetainedSeqNo()
abstract long getPersistedLocalCheckpoint()
abstract ShardLongFieldRange getRawFieldRange(String field)
protected abstract org.apache.lucene.search.ReferenceManager<ElasticsearchDirectoryReader> getReferenceManager(Engine.SearcherScope scope)
abstract SafeCommitInfo getSafeCommitInfo()
abstract SeqNoStats getSeqNoStats(long globalCheckpoint)
abstract Translog.Location getTranslogLastWriteLocation()
abstract TranslogStats getTranslogStats()
abstract long getWritingBytes()
protected static long guardedRamBytesUsed(org.apache.lucene.util.Accountable a)
abstract boolean hasCompleteOperationHistory(String reason, Engine.HistorySource historySource, MapperService mapperService, long startingSeqNo)
abstract Engine.IndexResult index(Engine.Index index)
abstract boolean isThrottled()
abstract boolean isTranslogSyncNeeded()
protected boolean maybeFailEngine(String source, Exception e)
abstract void maybePruneDeletes()
abstract boolean maybeRefresh(String source)
abstract Translog.Snapshot newChangesSnapshot(String source, MapperService mapperService, long fromSeqNo, long toSeqNo, boolean requiredFullRange)
abstract Engine.NoOpResult noOp(Engine.NoOp noOp)
void onSettingsChanged(org.elasticsearch.core.TimeValue translogRetentionAge, ByteSizeValue translogRetentionSize, long softDeletesRetentionOps)
abstract Translog.Snapshot readHistoryOperations(String reason, Engine.HistorySource historySource, MapperService mapperService, long startingSeqNo)
abstract Engine recoverFromTranslog(Engine.TranslogRecoveryRunner translogRecoveryRunner, long recoverUpToSeqNo)
abstract void refresh(String source)
boolean refreshNeeded()
abstract int restoreLocalHistoryFromTranslog(Engine.TranslogRecoveryRunner translogRecoveryRunner)
abstract void rollTranslogGeneration()
segments(boolean verbose)
SegmentsStats segmentsStats(boolean includeSegmentFileSizes, boolean includeUnloadedSegments)
abstract boolean shouldPeriodicallyFlush()
abstract boolean shouldRollTranslogGeneration()
abstract void skipTranslogRecovery()
abstract Engine.SyncedFlushResult syncFlush(String syncId, Engine.CommitId expectedCommitId)
abstract void syncTranslog()
abstract void trimOperationsFromTranslog(long belowTerm, long aboveSeqNo)
abstract void trimUnreferencedTranslogFiles()
abstract void updateMaxUnsafeAutoIdTimestamp(long newTimestamp)
void verifyEngineBeforeIndexClosing()
abstract void writeIndexingBuffer()
protected void writerSegmentStats(SegmentsStats stats)
-
Field Details
-
SYNC_COMMIT_ID
- See Also:
- Constant Field Values
-
HISTORY_UUID_KEY
- See Also:
- Constant Field Values
-
FORCE_MERGE_UUID_KEY
- See Also:
- Constant Field Values
-
MIN_RETAINED_SEQNO
- See Also:
- Constant Field Values
-
MAX_UNSAFE_AUTO_ID_TIMESTAMP_COMMIT_ID
- See Also:
- Constant Field Values
-
SEARCH_SOURCE
- See Also:
- Constant Field Values
-
CAN_MATCH_SEARCH_SOURCE
- See Also:
- Constant Field Values
-
DOC_STATS_SOURCE
- See Also:
- Constant Field Values
-
shardId
-
logger
protected final org.apache.logging.log4j.Logger logger -
engineConfig
-
store
-
isClosed
-
eventListener
-
failEngineLock
-
rwl
-
readLock
-
writeLock
-
failedEngine
-
lastWriteNanos
protected volatile long lastWriteNanos
-
-
Constructor Details
-
Engine
-
-
Method Details
-
guardedRamBytesUsed
protected static long guardedRamBytesUsed(org.apache.lucene.util.Accountable a)
Returns 0 in the case where accountable is null, otherwise returns ramBytesUsed() -
config
-
getLastCommittedSegmentInfos
protected abstract org.apache.lucene.index.SegmentInfos getLastCommittedSegmentInfos() -
getMergeStats
-
getHistoryUUID
Returns the history UUID for the engine -
getWritingBytes
public abstract long getWritingBytes()
Returns how many bytes we are currently moving from heap to disk -
completionStats
Returns the CompletionStats for this engine -
docStats
Returns the DocsStats for this engine -
docsStats
-
verifyEngineBeforeIndexClosing
Performs the pre-closing checks on the Engine.
- Throws:
IllegalStateException - if the sanity checks failed
-
getIndexThrottleTimeInMillis
public abstract long getIndexThrottleTimeInMillis()
Returns the number of milliseconds this engine was under index throttling. -
isThrottled
public abstract boolean isThrottled()
Returns true iff this engine is currently under index throttling.
- See Also:
getIndexThrottleTimeInMillis()
-
trimOperationsFromTranslog
public abstract void trimOperationsFromTranslog(long belowTerm, long aboveSeqNo) throws EngineException
Trims translog for terms below belowTerm and seq# above aboveSeqNo
- Throws:
EngineException
- See Also:
Translog.trimOperations(long, long)
-
index
Perform document index operation on the engine
- Parameters:
index - operation to perform
- Returns:
Engine.IndexResult containing updated translog location, version and document-specific failures. Note: engine-level failures (i.e. persistent engine failures) are thrown
- Throws:
IOException
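A minimal sketch of how a caller might apply an index operation and separate per-document failures from engine-level ones; the Engine.Result accessors getResultType() and getFailure() are assumed here and are not shown in this entry:

    static void applyIndexOperation(Engine engine, Engine.Index op) throws IOException {
        // engine-level (persistent) failures are thrown by index(); per-document failures are returned
        Engine.IndexResult result = engine.index(op);
        if (result.getResultType() == Engine.Result.Type.FAILURE) {
            Exception docFailure = result.getFailure();   // e.g. a version conflict for this document only
            // report or handle the per-document failure here
        }
    }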
-
delete
Perform document delete operation on the engine
- Parameters:
delete - operation to perform
- Returns:
Engine.DeleteResult containing updated translog location, version and document-specific failures. Note: engine-level failures (i.e. persistent engine failures) are thrown
- Throws:
IOException
-
noOp
- Throws:
IOException
-
syncFlush
public abstract Engine.SyncedFlushResult syncFlush(String syncId, Engine.CommitId expectedCommitId) throws EngineException
Attempts to do a special commit where the given syncId is put into the commit data. The attempt succeeds if there are no pending writes in Lucene and the current commit point is equal to the expected one.
- Parameters:
syncId - id of this sync
expectedCommitId - the expected value of
- Returns:
- true if the sync commit was made, false otherwise
- Throws:
EngineException
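A minimal sketch of the intended call pattern, assuming the Engine.SyncedFlushResult constants SUCCESS, PENDING_OPERATIONS and COMMIT_MISMATCH (they are not listed on this page):

    static boolean trySyncedFlush(Engine engine, String syncId) {
        Engine.CommitId expectedCommitId = engine.flush();            // commit id of the current commit
        Engine.SyncedFlushResult result = engine.syncFlush(syncId, expectedCommitId);
        return result == Engine.SyncedFlushResult.SUCCESS;            // PENDING_OPERATIONS or COMMIT_MISMATCH otherwise
    }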
-
getFromSearcher
protected final Engine.GetResult getFromSearcher(Engine.Get get, Engine.Searcher searcher) throws EngineException- Throws:
EngineException
-
get
public abstract Engine.GetResult get(Engine.Get get, MappingLookup mappingLookup, DocumentParser documentParser, Function<Engine.Searcher,Engine.Searcher> searcherWrapper) -
acquireSearcherSupplier
public final Engine.SearcherSupplier acquireSearcherSupplier(Function<Engine.Searcher,Engine.Searcher> wrapper) throws EngineException
Acquires a point-in-time reader that can be used to create Engine.Searchers on demand.
- Throws:
EngineException
-
acquireSearcherSupplier
public Engine.SearcherSupplier acquireSearcherSupplier(Function<Engine.Searcher,Engine.Searcher> wrapper, Engine.SearcherScope scope) throws EngineException
Acquires a point-in-time reader that can be used to create Engine.Searchers on demand.
- Throws:
EngineException
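A minimal sketch of point-in-time reads, assuming Engine.SearcherSupplier exposes acquireSearcher(String) and that both the supplier and each Engine.Searcher are released via try-with-resources:

    static int docCountAtPointInTime(Engine engine) throws IOException {
        try (Engine.SearcherSupplier supplier =
                 engine.acquireSearcherSupplier(s -> s, Engine.SearcherScope.EXTERNAL)) {
            try (Engine.Searcher searcher = supplier.acquireSearcher("example_source")) {
                // every searcher obtained from this supplier sees the same point in time
                return searcher.getIndexReader().numDocs();
            }
        }
    }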
-
acquireSearcher
- Throws:
EngineException
-
acquireSearcher
public Engine.Searcher acquireSearcher(String source, Engine.SearcherScope scope) throws EngineException- Throws:
EngineException
-
acquireSearcher
public Engine.Searcher acquireSearcher(String source, Engine.SearcherScope scope, Function<Engine.Searcher,Engine.Searcher> wrapper) throws EngineException- Throws:
EngineException
-
getReferenceManager
protected abstract org.apache.lucene.search.ReferenceManager<ElasticsearchDirectoryReader> getReferenceManager(Engine.SearcherScope scope) -
isTranslogSyncNeeded
public abstract boolean isTranslogSyncNeeded()
Checks if the underlying storage sync is required. -
ensureTranslogSynced
public abstract boolean ensureTranslogSynced(Stream<Translog.Location> locations) throws IOException
Ensures that all locations in the given stream have been written to the underlying storage.
- Throws:
IOException
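A minimal sketch of syncing after a batch of writes, assuming Engine.Result exposes getTranslogLocation() (null for operations that did not write to the translog):

    static boolean syncTranslogLocations(Engine engine, java.util.List<Engine.IndexResult> results) throws IOException {
        return engine.ensureTranslogSynced(
            results.stream()
                   .map(Engine.IndexResult::getTranslogLocation)
                   .filter(java.util.Objects::nonNull));   // skip operations that wrote nothing to the translog
    }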
-
syncTranslog
- Throws:
IOException
-
acquireHistoryRetentionLock
Acquires a lock on the translog files and Lucene soft-deleted documents to prevent them from being trimmed -
newChangesSnapshot
public abstract Translog.Snapshot newChangesSnapshot(String source, MapperService mapperService, long fromSeqNo, long toSeqNo, boolean requiredFullRange) throws IOException
Creates a new history snapshot from Lucene for reading operations whose seqno is in the requested seqno range (both inclusive). This feature requires soft-deletes to be enabled. If soft-deletes are disabled, this method will throw an IllegalStateException.
- Throws:
IOException
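A minimal sketch of reading a seqno range, assuming the usual org.elasticsearch imports, that Translog.Snapshot is closeable and returns null from next() when exhausted, and that Engine.HistorySource.INDEX selects Lucene-based history; the retention lock keeps the range from being trimmed while it is read:

    static int countOperationsInRange(Engine engine, MapperService mapperService,
                                      long fromSeqNo, long toSeqNo) throws IOException {
        try (Closeable retentionLock = engine.acquireHistoryRetentionLock(Engine.HistorySource.INDEX);
             Translog.Snapshot snapshot =
                 engine.newChangesSnapshot("example", mapperService, fromSeqNo, toSeqNo, true)) {
            int ops = 0;
            Translog.Operation op;
            while ((op = snapshot.next()) != null) {   // null once the snapshot is exhausted
                ops++;
            }
            return ops;
        }
    }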
-
readHistoryOperations
public abstract Translog.Snapshot readHistoryOperations(String reason, Engine.HistorySource historySource, MapperService mapperService, long startingSeqNo) throws IOException
Creates a new history snapshot for reading operations since startingSeqNo (inclusive). The returned snapshot can be retrieved from either Lucene index or translog files.
- Throws:
IOException
-
estimateNumberOfHistoryOperations
public abstract int estimateNumberOfHistoryOperations(String reason, Engine.HistorySource historySource, MapperService mapperService, long startingSeqNo) throws IOException
Returns the estimated number of history operations whose seq# is at least startingSeqNo (inclusive) in this engine.
- Throws:
IOException
-
hasCompleteOperationHistory
public abstract boolean hasCompleteOperationHistory(String reason, Engine.HistorySource historySource, MapperService mapperService, long startingSeqNo) throws IOException
Checks if this engine has every operation since startingSeqNo (inclusive) in its history (either Lucene or translog)
- Throws:
IOException
-
getMinRetainedSeqNo
public abstract long getMinRetainedSeqNo()
Gets the minimum retained sequence number for this engine.
- Returns:
- the minimum retained sequence number
-
getTranslogStats
-
getTranslogLastWriteLocation
Returns the last location that the translog of this engine has written into. -
ensureOpen
-
ensureOpen
protected final void ensureOpen() -
commitStats
Get commit stats for the last commit -
getPersistedLocalCheckpoint
public abstract long getPersistedLocalCheckpoint()- Returns:
- the persisted local checkpoint for this Engine
-
getSeqNoStats
- Returns:
- a SeqNoStats object, using local state and the supplied global checkpoint
-
getLastSyncedGlobalCheckpoint
public abstract long getLastSyncedGlobalCheckpoint()
Returns the latest global checkpoint value that has been persisted in the underlying storage (i.e. translog's checkpoint) -
segmentsStats
public SegmentsStats segmentsStats(boolean includeSegmentFileSizes, boolean includeUnloadedSegments)
Global stats on segments. -
fillSegmentStats
protected void fillSegmentStats(org.apache.lucene.index.SegmentReader segmentReader, boolean includeSegmentFileSizes, SegmentsStats stats) -
writerSegmentStats
-
getIndexBufferRAMBytesUsed
public abstract long getIndexBufferRAMBytesUsed()
How much heap is used that would be freed by a refresh. Note that this may throw AlreadyClosedException. -
segments
The list of segments in the engine. -
refreshNeeded
public boolean refreshNeeded() -
refresh
Synchronously refreshes the engine for new search operations to reflect the latest changes.- Throws:
EngineException
-
maybeRefresh
Synchronously refreshes the engine for new search operations to reflect the latest changes unless another thread is already refreshing the engine concurrently.- Returns:
true if a refresh happened, otherwise false
- Throws:
EngineException
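A minimal sketch of the refresh-if-needed pattern these methods support; the source string is an arbitrary label used only for stats and logging:

    static boolean refreshIfNeeded(Engine engine) {
        // returns true only if this call performed a refresh; false means nothing was pending
        // or another thread was already refreshing concurrently
        return engine.refreshNeeded() && engine.maybeRefresh("example_source");
    }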
-
writeIndexingBuffer
Called when our engine is using too much heap and should move buffered indexed/deleted documents to disk.- Throws:
EngineException
-
shouldPeriodicallyFlush
public abstract boolean shouldPeriodicallyFlush()
Checks if this engine should be flushed periodically. This check is mainly based on the uncommitted translog size and the translog flush threshold setting. -
flush
Flushes the state of the engine including the transaction log, clearing memory.
- Parameters:
force - if true a Lucene commit is executed even if no changes need to be committed
waitIfOngoing - if true this call will block until all currently running flushes have finished; otherwise this call will return without blocking
- Returns:
- the commit Id for the resulting commit
- Throws:
EngineException
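A minimal sketch of the periodic-flush pattern, combining this method with shouldPeriodicallyFlush() documented above:

    static void maybeFlush(Engine engine) {
        if (engine.shouldPeriodicallyFlush()) {
            // force=false: no empty commit; waitIfOngoing=false: skip if a flush is already running
            engine.flush(false, false);
        }
    }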
-
flush
Flushes the state of the engine including the transaction log, clearing memory and persisting documents in the lucene index to disk including a potentially heavy and durable fsync operation. This operation is not going to block if another flush operation is currently running and won't write a lucene commit if nothing needs to be committed.- Returns:
- the commit Id for the resulting commit
- Throws:
EngineException
-
trimUnreferencedTranslogFiles
Checks and removes translog files that no longer need to be retained. See TranslogDeletionPolicy for details.
- Throws:
EngineException
-
shouldRollTranslogGeneration
public abstract boolean shouldRollTranslogGeneration()
Tests whether or not the translog generation should be rolled to a new generation. This test is based on the size of the current generation compared to the configured generation threshold size.
- Returns:
true if the current generation should be rolled to a new generation
-
rollTranslogGeneration
Rolls the translog generation and cleans unneeded.- Throws:
EngineException
-
forceMerge
public abstract void forceMerge(boolean flush, int maxNumSegments, boolean onlyExpungeDeletes, boolean upgrade, boolean upgradeOnlyAncientSegments, @Nullable String forceMergeUUID) throws EngineException, IOException
Triggers a forced merge on this engine
- Throws:
EngineException
IOException
-
acquireLastIndexCommit
public abstract Engine.IndexCommitRef acquireLastIndexCommit(boolean flushFirst) throws EngineException
Snapshots the most recent index and returns a handle to it. If needed, it will try to "commit" the Lucene index to make sure we have a "fresh" copy of the files to snapshot.
- Parameters:
flushFirst - indicates whether the engine should flush before returning the snapshot
- Throws:
EngineException
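A minimal sketch of snapshotting the files of the most recent commit, assuming Engine.IndexCommitRef is closeable and exposes getIndexCommit() (not shown on this page); closing the ref releases the commit so Lucene may delete the files again:

    static java.util.Collection<String> snapshotFileNames(Engine engine) throws IOException {
        try (Engine.IndexCommitRef commitRef = engine.acquireLastIndexCommit(true /* flushFirst */)) {
            org.apache.lucene.index.IndexCommit commit = commitRef.getIndexCommit();
            return commit.getFileNames();   // copy these files while the ref is still held open
        }
    }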
-
acquireSafeIndexCommit
Snapshots the most recent safe index commit from the engine.- Throws:
EngineException
-
acquireIndexCommitForSnapshot
Acquires the index commit that should be included in a snapshot.- Throws:
EngineException
-
getSafeCommitInfo
- Returns:
- a summary of the contents of the current safe commit
-
failEngine
Fail the engine due to some error. The engine will also be closed. The underlying store is marked corrupted iff the failure is caused by index corruption -
maybeFailEngine
Check whether the engine should be failed -
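A hypothetical sketch of how a subclass might use maybeFailEngine around an internal operation: a fatal error (e.g. index corruption) fails and closes the engine, while the original exception still reaches the caller:

    private void runWithEngineFailureHandling(String source, Runnable action) {
        try {
            action.run();
        } catch (Exception e) {
            try {
                maybeFailEngine(source, e);   // fails (and closes) the engine only if the error is considered fatal
            } catch (Exception inner) {
                e.addSuppressed(inner);
            }
            throw e;
        }
    }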
closeNoLock
Method to close the engine while the write lock is held. Must decrement the supplied latch when closing work is done and resources are freed. -
flushAndClose
Flush the engine (committing segments to disk and truncating the translog) and close it.- Throws:
IOException
-
close
- Specified by:
close in interface AutoCloseable
- Specified by:
close in interface Closeable
- Throws:
IOException
-
onSettingsChanged
public void onSettingsChanged(org.elasticsearch.core.TimeValue translogRetentionAge, ByteSizeValue translogRetentionSize, long softDeletesRetentionOps) -
getLastWriteNanos
public long getLastWriteNanos()
Returns the timestamp of the last write in nanoseconds. Note: this time might not be absolutely accurate since Engine.Operation.startTime() is used, which might be slightly inaccurate.
- See Also:
System.nanoTime(), Engine.Operation.startTime()
-
activateThrottling
public abstract void activateThrottling()
Request that this engine throttle incoming indexing requests to one thread. Must be matched by a later call to deactivateThrottling(). -
deactivateThrottling
public abstract void deactivateThrottling()
Reverses a previous activateThrottling() call. -
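A minimal sketch of the required pairing; callers such as an indexing-memory controller would typically balance the two calls in try/finally:

    static void runWhileThrottled(Engine engine, Runnable work) {
        engine.activateThrottling();
        try {
            work.run();                       // e.g. move indexing buffers to disk
        } finally {
            engine.deactivateThrottling();    // always re-enable unthrottled indexing
        }
    }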
restoreLocalHistoryFromTranslog
public abstract int restoreLocalHistoryFromTranslog(Engine.TranslogRecoveryRunner translogRecoveryRunner) throws IOException
This method replays the translog to restore the Lucene index which might be reverted previously. This ensures that all acknowledged writes are restored correctly when this engine is promoted.
- Returns:
- the number of translog operations that have been recovered
- Throws:
IOException
-
fillSeqNoGaps
Fills up the local checkpoints history with no-ops until the local checkpoint and the max seen sequence ID are identical.- Parameters:
primaryTerm - the shard's primary term this engine was created for
- Returns:
- the number of no-ops added
- Throws:
IOException
-
recoverFromTranslog
public abstract Engine recoverFromTranslog(Engine.TranslogRecoveryRunner translogRecoveryRunner, long recoverUpToSeqNo) throws IOException
Performs recovery from the transaction log up to recoverUpToSeqNo (inclusive). This operation will close the engine if the recovery fails.
- Parameters:
translogRecoveryRunner - the translog recovery runner
recoverUpToSeqNo - the upper bound, inclusive, of sequence number to be recovered
- Throws:
IOException
-
skipTranslogRecovery
public abstract void skipTranslogRecovery()
Do not replay translog operations, but make the engine ready. -
maybePruneDeletes
public abstract void maybePruneDeletes()
Tries to prune buffered deletes from the version map. -
getMaxSeenAutoIdTimestamp
public long getMaxSeenAutoIdTimestamp()
Returns the maximum auto_id_timestamp of all append-only index requests that have been processed by this engine, or the auto_id_timestamp received from its primary shard via updateMaxUnsafeAutoIdTimestamp(long). Note that this method returns the auto_id_timestamp of all append-only requests, not max_unsafe_auto_id_timestamp. -
updateMaxUnsafeAutoIdTimestamp
public abstract void updateMaxUnsafeAutoIdTimestamp(long newTimestamp)
Forces this engine to advance its max_unsafe_auto_id_timestamp marker to at least the given timestamp. The engine will disable the optimization for all append-only requests whose timestamp is at most newTimestamp. -
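A minimal sketch of the handshake these two methods support: the primary reports the newest auto_id_timestamp it has seen, and the replica raises its own unsafe marker so the append-only optimization is disabled for requests at or below that timestamp:

    static void propagateAutoIdTimestamp(Engine primaryEngine, Engine replicaEngine) {
        long maxSeenOnPrimary = primaryEngine.getMaxSeenAutoIdTimestamp();
        replicaEngine.updateMaxUnsafeAutoIdTimestamp(maxSeenOnPrimary);
    }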
getMaxSeqNoOfUpdatesOrDeletes
public abstract long getMaxSeqNoOfUpdatesOrDeletes()
Returns the maximum sequence number of update or delete operations that have been processed in this engine, or the sequence number from advanceMaxSeqNoOfUpdatesOrDeletes(long). An index request is considered an update operation if it overwrites existing documents in the Lucene index with the same document id.
A note on the optimization using max_seq_no_of_updates_or_deletes: for each operation O, the key invariants are:
- I1: There is no operation on docID(O) with seqno that is > MSU(O) and < seqno(O)
- I2: If MSU(O) < seqno(O) then docID(O) did not exist when O was applied; more precisely, if there is any O' with seqno(O') < seqno(O) and docID(O') = docID(O) then the one with the greatest seqno is a delete.
When a receiving shard (either a replica or a follower) receives an operation O, it must first ensure its own MSU is at least MSU(O), and then compare its MSU to its local checkpoint (LCP). If LCP < MSU then there's a gap: there may be some operations that act on docID(O) about which we do not yet know, so we cannot perform an add. Note this also covers the case where a future operation O' with seqNo(O') > seqNo(O) and docID(O') = docID(O) is processed before O. In that case MSU(O') is at least seqno(O') and this means MSU >= seqNo(O') > seqNo(O) > LCP (because O wasn't processed yet).
However, if MSU <= LCP then there is no gap: we have processed every operation <= LCP, and no operation O' with seqno(O') > LCP and seqno(O') < seqno(O) also has docID(O') = docID(O), because such an operation would have seqno(O') > LCP >= MSU >= MSU(O) which contradicts the first invariant. Furthermore in this case we immediately know that docID(O) has been deleted (or never existed) without needing to check Lucene for the following reason. If there's no earlier operation on docID(O) then this is clear, so suppose instead that the preceding operation on docID(O) is O': 1. The first invariant above tells us that seqno(O') <= MSU(O) <= LCP so we have already applied O' to Lucene. 2. Also MSU(O) <= MSU <= LCP < seqno(O) (we discard O if seqno(O) <= LCP) so the second invariant applies, meaning that the O' was a delete.
Therefore, if MSU <= LCP < seqno(O) we know that O can safely be optimized and added to Lucene with addDocument. Moreover, operations that are optimized using the MSU optimization must not be processed twice as this will create duplicates in Lucene. To avoid this we check the local checkpoint tracker to see if an operation was already processed.
- See Also:
advanceMaxSeqNoOfUpdatesOrDeletes(long)
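A minimal sketch of the decision rule described above, for a receiving shard deciding whether an incoming index operation can skip the docID lookup and go straight to addDocument; the local checkpoint would come from the engine's local checkpoint tracker (not shown here):

    static boolean canOptimizeAsAddDocument(Engine engine, long seqNo, long localCheckpoint) {
        long msu = engine.getMaxSeqNoOfUpdatesOrDeletes();
        // MSU <= LCP < seqno(O): every earlier operation on this docID has been applied and the
        // latest one, if any, was a delete, so the document cannot currently exist in Lucene
        return msu <= localCheckpoint && localCheckpoint < seqNo;
    }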
-
advanceMaxSeqNoOfUpdatesOrDeletes
public abstract void advanceMaxSeqNoOfUpdatesOrDeletes(long maxSeqNoOfUpdatesOnPrimary)
A replica shard receives a new max_seq_no_of_updates from its primary shard, then calls this method to advance this marker to at least the given sequence number. -
getRawFieldRange
- Returns:
- a ShardLongFieldRange containing the min and max raw values of the given field for this shard if the engine guarantees these values never to change, or ShardLongFieldRange.EMPTY if this field is empty, or ShardLongFieldRange.UNKNOWN if this field's value range may change in future.
- Throws:
IOException
-
getEngineConfig
-