Particularly tortured clusters might be buried under thousands of osdmap
epochs of thrashing with thousands of pgs. Rebuilding the past_intervals
becomes O(n^2) in that case, and can take days and days. Instead, do the
rebuild for all PGs in parallel during a single pass over the osdmap
history.
This is an ugly (mostly) one-time-use hack that can be removed soon.
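A rough sketch of the single-pass idea (all names here are illustrative
stand-ins, not the actual Ceph types): walk the osdmap history once,
updating every PG at each epoch, instead of walking it once per PG.

    #include <map>
    #include <vector>

    // Hypothetical stand-ins for illustration only.
    using epoch_t = unsigned;
    struct pg_interval_t { epoch_t first, last; };
    struct PG {
      epoch_t last_epoch_clean;
      std::map<epoch_t, pg_interval_t> past_intervals;
    };

    // One pass over the osdmap history for all PGs together: O(epochs)
    // map loads total, instead of O(epochs) loads per PG.
    void rebuild_all_past_intervals(std::vector<PG*>& pgs,
                                    epoch_t oldest, epoch_t newest) {
      for (epoch_t e = oldest; e <= newest; ++e) {
        // the map for epoch e would be loaded and decoded once here
        for (PG* pg : pgs) {
          if (e < pg->last_epoch_clean)
            continue;  // this PG does not need history that far back
          // ...extend or close pg->past_intervals for epoch e...
        }
      }
    }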
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Currently we drop and retake locks during handle_osd_map's calls to
advance_map and activate_map. Instead, take them all once, and hold them.
This avoids leaving dirty in-core state in the PG without the lock held.
This will clearly go away as soon as the map threading stuff is redone.
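The gist of the change, as a minimal sketch (locking details simplified;
not the real OSD code):

    #include <mutex>
    #include <vector>

    struct PG { std::mutex lock; /* in-core state, possibly dirty */ };

    // Take every PG lock once and hold it across both phases, so dirty
    // in-core state is never visible without the lock held.
    void handle_osd_map(std::vector<PG*>& pgs) {
      for (PG* pg : pgs) pg->lock.lock();
      // advance_map(...);   // may dirty in-core PG state
      // activate_map(...);  // consumes that state, still under lock
      for (PG* pg : pgs) pg->lock.unlock();
    }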
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Make sure we record any rewind_divergent_log. In the activate case, this
will happen anyway, but mark it dirty here for correctness/completeness.
The merge_log case might be a bug.
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
These are all called from within the state machine, so we can simply set
the dirty flags.
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
all_activated_and_committed() is called from _activate_committed(), called
from an objectstore completion, and also from the state machine, which is
part of a larger transaction.
Instead, set dirty_info, and build/apply a transaction in the caller
(the completion) as needed. Fixes part of #2360.
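A condensed sketch of the resulting pattern (types and calls
abbreviated): the shared helper only marks state dirty, and the
objectstore completion builds and applies its own transaction.

    // Illustrative only; the real types live in the OSD code.
    struct ObjectStoreTransaction { /* collects writes */ };

    struct PG {
      bool dirty_info = false;

      void all_activated_and_committed() {
        dirty_info = true;  // safe from both call sites: no write here
      }

      void _activate_committed() {  // objectstore completion callback
        all_activated_and_committed();
        if (dirty_info) {
          ObjectStoreTransaction t;
          // write_info(t); apply_transaction(t);
          dirty_info = false;
        }
      }
    };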
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
We shouldn't modify the local notion of the history without recording it to
disk. And we (probably) also don't need to do that at all on query.
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
In proc_replica_info and proc_primary_info, we may or may not update
the pg_info_t. If we do, set dirty_info, so that it will be recorded.
Same goes for when the primary pushes out updated stats to us.
Also, do not write a purged_snaps() update directly; rely on the caller
to write out dirty info.
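A sketch of the flag-only pattern (pg_info_t reduced to one field for
illustration):

    struct pg_info_t { unsigned last_update = 0; };

    struct PG {
      pg_info_t info;
      bool dirty_info = false;

      void proc_replica_info(const pg_info_t& oinfo) {
        if (oinfo.last_update > info.last_update) {
          info = oinfo;       // we changed the in-core info...
          dirty_info = true;  // ...so flag it to be recorded
        }
        // no write here; the caller writes out dirty info
      }
    };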
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Previously we would check and write dirty_info *without the pg lock* after
doing the advance and activate map calls. This was unlikely to race with
anything because the queues were drained, but definitely not right.
Instead, do the write in activate_map, or explicitly if activate_map is
not called (so that we record our progress after handling maps when we are
not up).
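Sketched out (locking simplified), the point is that the check-and-write
now happens while the pg lock is held:

    #include <mutex>

    struct PG {
      std::mutex lock;
      bool dirty_info = false;

      void activate_map() {
        std::lock_guard<std::mutex> l(lock);
        // ...activation work that may set dirty_info...
        if (dirty_info) {
          // write_info(); -- under the lock, so nothing can race
          dirty_info = false;
        }
      }
    };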
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Share past intervals when starting up new replicas. This can happen via
an MOSDPGInfo or an MOSDPGLog message.
Fix up get_or_create_pg() so the past_intervals arg is required (and a ref,
like the other args). Fix doxygen comment.
Now the only time generate_past_intervals() should do any work is when
upgrading old clusters, during pg creation, and (possibly) during pg
split (when that is fully implemented).
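The signature change looks roughly like this (argument list abbreviated;
names illustrative):

    #include <map>

    using epoch_t = unsigned;
    struct pg_info_t {};
    struct pg_interval_t {};
    struct PG;

    // past_intervals is now a required reference, like the other args,
    // so every creation path must supply the shared history.
    PG* get_or_create_pg(const pg_info_t& info,
                         std::map<epoch_t, pg_interval_t>& past_intervals,
                         epoch_t created);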
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
This ensures that we save our work.
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
If ceph-osd is way behind, we will advance through past maps before we
mark ourselves up. This avoids the slow recalculation once we are up, and
the ensuing badness.
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
There is a nice symmetry there with fulfill_log(), but it is a short
function with a single caller that mostly just forces us to copy a bunch
of data structures around unnecessarily. Drop it.
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Send past_intervals along with pg_info_t on every notify. The reasoning
here is as follows:
- we already have the state in memory
- if we don't send it, and the primary doesn't have it, it will
recalculate it by reading/decoding many previous maps from disk
- for a highly-tortured cluster, I see past_intervals on the order of
~6 KB, times 600 pgs means ~2.5 MB sent for every activate_map(). For
comparison, the same cluster would need to read and decode ~1 GB of
maps to recalculate the same info.
- for healthy clusters, the data is small, and costs little.
- for unhealthy clusters, the data is large, but most useful.
In theory we could set a threshold so that we don't send it if it is
large, but allow the primary to query it explicitly. I doubt it's worth
the complexity.
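Schematically, the notify payload becomes a pair (types abbreviated to
stand-ins):

    #include <map>

    using epoch_t = unsigned;
    struct pg_info_t {};
    struct pg_interval_t {};

    // What a notify now carries: the info plus the in-memory interval
    // history, so the primary never regenerates it from old maps.
    struct pg_notify_payload {
      pg_info_t info;
      std::map<epoch_t, pg_interval_t> past_intervals;  // new
    };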
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
We can (currently) get into a situation where we don't have the full
history back to last_epoch_clean because non-primaries record past
intervals but don't initially have the full history, resulting in a partial
recent history.
If this happens, only fill in what's missing; no need to rebuild the recent
parts too.
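The shape of the fix, as a sketch (interval computation elided):

    #include <map>

    using epoch_t = unsigned;
    struct pg_interval_t { epoch_t first, last; };

    // Only generate the range below what we already have, instead of
    // discarding a partial-but-recent history and rebuilding all of it.
    void fill_missing_past_intervals(std::map<epoch_t, pg_interval_t>& pi,
                                     epoch_t last_epoch_clean,
                                     epoch_t current) {
      epoch_t end = current;
      if (!pi.empty())
        end = pi.begin()->second.first;  // keep the recent part
      for (epoch_t e = last_epoch_clean; e < end; ++e) {
        // ...compute and insert the interval covering epoch e...
      }
    }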
Signed-off-by: Sage Weil <sage@newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
We may not recalculate all the way back to last_interval_clean due to
the oldest_map floor. Figure out what we want and could calculate before
deciding whether what we have is insufficient.
Also, print something if we discard and recalculate so it is clear what is
happening and why.
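In pseudocode terms (names illustrative), the decision becomes:

    #include <algorithm>

    using epoch_t = unsigned;

    // We can't calculate further back than the oldest stored map, so
    // clamp the target before judging whether what we have is enough.
    bool should_discard_and_recalc(epoch_t have_back_to,
                                   epoch_t last_epoch_clean,
                                   epoch_t oldest_map) {
      epoch_t want = std::max(last_epoch_clean, oldest_map);
      return have_back_to > want;  // some reachable history is missing
    }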
Signed-off-by: Sage Weil <sage@newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Fixes: 2356
Reviewed-by: Josh Durgin <josh.durgin@dreamhost.com>
We may send an MOSDMap as a reply to various requests, including
- a failure report
- a boot message
- a pg_temp message
- an up_thru message
In these cases, send a single MOSDMap message, but limit how big it gets.
All recipients here are osds, which are smart enough to request more maps
based on the MOSDMap::newest_map field.
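A sketch of the cap (field names other than newest_map are
illustrative):

    #include <algorithm>

    using epoch_t = unsigned;

    struct MOSDMapSketch {
      epoch_t first, last;  // epoch range actually included
      epoch_t newest_map;   // what exists; OSDs request the rest
    };

    MOSDMapSketch build_reply(epoch_t since, epoch_t newest,
                              epoch_t max_epochs) {
      MOSDMapSketch m;
      m.first = since + 1;
      m.last = std::min(newest, since + max_epochs);  // size limit
      m.newest_map = newest;  // recipient sees it is still behind
      return m;
    }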
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
|
| |
| |
| |
| |
| |
| | |
From 92becb696bde7f0aa9687b2fe7505ed1ac9f493b
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
|
| |
| |
| |
| | |
Signed-off-by: Samuel Just <samuel.just@dreamhost.com>
|
| |
| |
| |
| | |
Signed-off-by: Samuel Just <samuel.just@dreamhost.com>
|
| |
| |
| |
| |
| |
| | |
Also do some sanity checks on the subsystem log level settings.
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This allows you to get and set subsystem debug levels via the normal
config access methods. Among other things, this lets librados users set
debug levels.
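For example, a librados user can now do something like this (subsystem
name and level chosen arbitrarily):

    #include <rados/librados.hpp>

    int main() {
      librados::Rados rados;
      rados.init(nullptr);
      rados.conf_read_file(nullptr);
      // Subsystem debug levels via the normal config interface:
      rados.conf_set("debug_objecter", "20");
      // ...connect and use the cluster as usual...
      return 0;
    }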
Fixes: #2350
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
|
| |
| |
| |
| |
| |
| | |
size_t was accidentally copy-pasted.
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
|
| |
| |
| |
| |
| |
| | |
On 2.6.32-5-amd64 (debian) and XFS I'm getting EINVAL.
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
|
| |
| |
| |
| | |
Signed-off-by: Joao Eduardo Luis <jecluis@gmail.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Peer_info_requested should be a strict subset of the probe set. Filter
out of peer_info_requested any osds dropped from probe. We could also
restart peering from scratch here, but this is less expensive, because we
don't have to re-probe everyone.
Once we adjust the probe and peer_info_requested sets, (re)check if we're
done: we may have been blocked on a previous peer_info_requested entry.
The situation I saw was:
"recovery_state": [
{ "name": "Started\/Primary\/Peering\/GetInfo",
"enter_time": "2012-04-25 14:39:56.905748",
"requested_info_from": [
{ "osd": 193}]},
{ "name": "Started\/Primary\/Peering",
"enter_time": "2012-04-25 14:39:56.905748",
"probing_osds": [
79,
191,
195],
"down_osds_we_would_probe": [],
"peering_blocked_by": []},
{ "name": "Started",
"enter_time": "2012-04-25 14:39:56.905742"}]}
Once in this state, cycling osd.193 doesn't help, because the prior_set
is not affected.
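The filtering step, sketched (state machine context omitted):

    #include <set>

    // Drop anything in peer_info_requested that left the probe set,
    // then re-check: we may have been blocked on a dropped entry.
    void adjust_after_probe_change(const std::set<int>& probe,
                                   std::set<int>& peer_info_requested) {
      for (auto it = peer_info_requested.begin();
           it != peer_info_requested.end(); ) {
        if (!probe.count(*it))
          it = peer_info_requested.erase(it);  // osd dropped from probe
        else
          ++it;
      }
      // if (peer_info_requested.empty()) post_event(GotInfo());
    }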
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Reviewed-by: Samuel Just <samuel.just@dreamhost.com>
|
| |
| |
| |
| |
| |
| |
| | |
The MNotifyRec handler also posts GotInfo under the same conditions
after calling get_infos().
Signed-off-by: Samuel Just <samuel.just@dreamhost.com>
|
| |
| |
| |
| | |
This reverts commit 9579365720818125a4b15741ae65e58948b9c69f.
|
| |
| |
| |
| | |
Signed-off-by: Samuel Just <samuel.just@dreamhost.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
We need to distinguish between the old 0 (meaning undefined) and
the new 0 (meaning switch to 0 and disable the flags). So rev the
encoding version on PGMap::Incremental, and if you decode an old
version with [near]full_ratio == 0, set the ratio to -1 instead. Then
when applying the Incremental, interpret -1 as no change.
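The decode-side shim looks roughly like this (version number and fields
abbreviated):

    // Illustrative sketch: old encodings cannot distinguish "unset"
    // from a real 0, so translate old 0 into the new sentinel.
    struct IncrementalSketch {
      float full_ratio = -1;      // -1 now means "no change"
      float nearfull_ratio = -1;

      void finish_decode(unsigned struct_v) {
        if (struct_v < 2) {       // old encoding: 0 meant undefined
          if (full_ratio == 0) full_ratio = -1;
          if (nearfull_ratio == 0) nearfull_ratio = -1;
        }
      }
    };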
Signed-off-by: Greg Farnum <gregory.farnum@dreamhost.com>
Reviewed-by: Sage Weil <sage@newdream.net>
Reviewed-by: Sage Weil <sage.weil@dreamhost.com>
* snap_set to a deleted (and recreated) snapshot
* resizing down (truncating) and back up
* resizing to non-object-aligned sizes
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This way we can't miss an update if we get a notify during ictx_refresh.
Specifically, a race like this:
Thread 1                Thread 2               Process 2
ictx_refresh()
  read_header()
                                               snap_create()
                        notify()
                        need_refresh = true
  process header...
  need_refresh = false
If this happened, we would not re-read the header with the new
snapshot, so the snapshot would not happen at the intended point
in time, but only after we re-read the header again.
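The essence of the fix, sketched with a plain mutex (the real code uses
librbd's own locks):

    #include <mutex>

    struct ImageCtxSketch {
      std::mutex refresh_lock;
      bool need_refresh = false;

      void handle_notify() {  // Thread 2
        std::lock_guard<std::mutex> l(refresh_lock);
        need_refresh = true;  // can no longer be lost mid-refresh
      }

      void ictx_refresh() {   // Thread 1
        std::lock_guard<std::mutex> l(refresh_lock);
        // read_header(); process header...
        need_refresh = false; // clears only what we actually read
      }
    };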
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
* snapid should determine whether our mapped snapshot is gone, not snapname
* snap_set(<nonexistent_snap>) shouldn't reset us to CEPH_NOSNAP
* snapname should be set before using it in the perfcounter name
* snapname and image name don't need to be passed as arguments since an
ImageCtx already contains that info
* ictx_check() doesn't need to check for non-existent snaps - only I/Os care,
so check in check_io() instead
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The earlier condition is >. != means < at this point, and the nesting
is unnecessary.
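In schematic form:

    // In an if/else chain ordered by comparison, the final != already
    // implies "less than", so no nested check is needed.
    void compare(int a, int b) {
      if (a > b) {
        // handle a > b
      } else if (a != b) {  // at this point a != b can only mean a < b
        // handle a < b
      }
    }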
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Compare *every* address for a match, or else note that it is (or might be)
different. Previously, we falsely took diff==0 to mean that all addrs
were definitely equal, which was not necessarily the case.
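The corrected comparison, sketched (addr_t is a stand-in for the real
address type):

    #include <cstddef>
    #include <vector>

    struct addr_t { unsigned host = 0, port = 0; };

    bool same_addr(const addr_t& a, const addr_t& b) {
      return a.host == b.host && a.port == b.port;
    }

    // Equality requires checking *every* address; any mismatch
    // (including a length difference) means the sets differ.
    bool addrs_equal(const std::vector<addr_t>& a,
                     const std::vector<addr_t>& b) {
      if (a.size() != b.size())
        return false;
      for (std::size_t i = 0; i < a.size(); ++i)
        if (!same_addr(a[i], b[i]))
          return false;
      return true;
    }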
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
|
| | |
| | |
| | |
| | | |
Signed-off-by: Samuel Just <samuel.just@dreamhost.com>
|
| | |
| | |
| | |
| | | |
Signed-off-by: Samuel Just <samuel.just@dreamhost.com>
|
| | |
| | |
| | |
| | | |
Signed-off-by: Samuel Just <samuel.just@dreamhost.com>
|
| | |
| | |
| | |
| | | |
Signed-off-by: Samuel Just <samuel.just@dreamhost.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We only deal with the case where the entire map is identical, since the
individual items are too small to make the pointer overhead worthwhile.
Too bad. An in-memory btree-like structure would work better for this.
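A sketch of whole-map sharing with shared_ptr (the contained type is a
stand-in):

    #include <map>
    #include <memory>

    using pg_map_t = std::map<int, int>;  // stand-in for the real type

    // If this epoch's map is identical to the previous one, share the
    // same object instead of storing a second copy.
    std::shared_ptr<const pg_map_t>
    dedup_map(const std::shared_ptr<const pg_map_t>& prev, pg_map_t next) {
      if (prev && *prev == next)
        return prev;  // dedup: point at the existing map
      return std::make_shared<const pg_map_t>(std::move(next));
    }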
Signed-off-by: Sage Weil <sage@newdream.net>
|
| | |
| | |
| | |
| | |
| | |
| | | |
This will let us dedup later.
Signed-off-by: Sage Weil <sage@newdream.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
On by default. This trades CPU for memory. Some might have unlimited RAM
and not care.
Signed-off-by: Sage Weil <sage@newdream.net>
|