| author | Alan Conway <aconway@apache.org> | 2010-10-27 15:30:49 +0000 |
|---|---|---|
| committer | Alan Conway <aconway@apache.org> | 2010-10-27 15:30:49 +0000 |
| commit | aae11121cfcf891b2365241141f9ab9cb47d3024 (patch) | |
| tree | a9027b00714282b00910b07c8a2c821a0a280797 /cpp | |
| parent | 328d20e1bb2187d741c892387e06a559739e9363 (diff) | |
| download | qpid-python-aae11121cfcf891b2365241141f9ab9cb47d3024.tar.gz | |
Updates to new cluster design.
git-svn-id: https://svn.apache.org/repos/asf/qpid/trunk/qpid@1028006 13f79535-47bb-0310-9956-ffa450edef68
Diffstat (limited to 'cpp')
| -rw-r--r-- | cpp/src/qpid/cluster/new-cluster-active-passive.txt | 4 |
| -rw-r--r-- | cpp/src/qpid/cluster/new-cluster-design.txt | 12 |
| -rw-r--r-- | cpp/src/qpid/cluster/new-cluster-plan.txt | 4 |
3 files changed, 19 insertions, 1 deletion
```diff
diff --git a/cpp/src/qpid/cluster/new-cluster-active-passive.txt b/cpp/src/qpid/cluster/new-cluster-active-passive.txt
index aa41f530b2..3463d279c6 100644
--- a/cpp/src/qpid/cluster/new-cluster-active-passive.txt
+++ b/cpp/src/qpid/cluster/new-cluster-active-passive.txt
@@ -37,8 +37,8 @@ Simpler implementation of broker::Cluster:
 - can use smaller message IDs: just sequence number. Can be implicit.
 
 Extra requirements:
+- Exactly one broker has to take over if the primary fails.
 - Passive members refuse client connections and redirect to active member.
-- Choose new active member when the active dies.
 - On failover, clients keep trying till they find the active member.
 
 ** Active/active vs. active passive
@@ -47,6 +47,7 @@ Active/active benefits:
 - Total # connections: practical 60k limit per node.
 - Handle client losing connectivity to one cluster node - can fail over to any.
 - Some load sharing: reading from client + multicast only done on direct node.
+- Clients can switch to any broker.
 
 Active/active drawbacks:
 - Co-ordinating message allocation impacts performance.
@@ -57,3 +58,4 @@ Active/passive benefits:
 Active/passive drawbacks:
 - Can't help clients with no connectivity to the active member.
 - Clients must find the single active node in failover.
+- May have gaps where no broker is active for some period of time.
diff --git a/cpp/src/qpid/cluster/new-cluster-design.txt b/cpp/src/qpid/cluster/new-cluster-design.txt
index e683aaf576..8d9f72ac02 100644
--- a/cpp/src/qpid/cluster/new-cluster-design.txt
+++ b/cpp/src/qpid/cluster/new-cluster-design.txt
@@ -349,6 +349,18 @@ can get better performance because we don't need to transfer ownership
 or information about acquisition. We need to optimize this case to
 perform like an active-passive mode of replication.
 
+** Increasing concurrency and load sharing
+The current cluster is bottlenecked by processing everything in the
+CPG deliver thread. By removing the need for identical operation on
+each broker, we open up the possibility of greater concurrency.
+
+Handling multicast enqueue, acquire, accept, release etc.: concurrency per queue.
+Operations on different queues can be done in different threads.
+
+The new design does not force each broker to do all the work in the
+CPG thread, so spreading load across cluster members should give some
+scale-up.
+
 ** Misc outstanding issues & notes
 
 Replicating wiring
diff --git a/cpp/src/qpid/cluster/new-cluster-plan.txt b/cpp/src/qpid/cluster/new-cluster-plan.txt
index 22c952a3d3..35f35288cc 100644
--- a/cpp/src/qpid/cluster/new-cluster-plan.txt
+++ b/cpp/src/qpid/cluster/new-cluster-plan.txt
@@ -398,6 +398,10 @@ No integration with DTX transactions.
 ** TODO [#B] Make new cluster work with replication exchange.
 Possibly re-use some common logic. Replication exchange is like
 clustering except over TCP.
+** TODO [#B] Better concurrency, scalability on multi-cores.
+Introduce PollableQueue of operations per broker queue. Queue up mcast
+operations (enqueue, acquire, accept etc.) to be handled concurrently
+on different queues. Performance testing to verify improved scalability.
 ** TODO [#C] Async completion for declare, bind, destroy queues and exchanges.
 Cluster needs to complete these asynchronously to guarantee resources
 exist across the cluster when the command completes.
```
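The design section and the plan's PollableQueue TODO describe the same mechanism: give each broker queue its own serialized stream of operations, so multicast deliveries for different queues can run on different threads while per-queue ordering is preserved. The sketch below is a minimal stand-in for that idea, not the actual `qpid::sys::PollableQueue` API; the `OpQueue` class, its `push` method, and the dedicated `std::thread` per queue (rather than qpid's poller-driven dispatch) are all illustrative assumptions.

```cpp
// Minimal sketch of "one serialized operation queue per broker queue".
// Hypothetical names; the real qpid PollableQueue dispatches batches via
// a Poller rather than one dedicated thread per queue.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class OpQueue {
public:
    using Op = std::function<void()>;

    OpQueue() : worker_(&OpQueue::run, this) {}

    ~OpQueue() {
        {
            std::lock_guard<std::mutex> l(lock_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();  // drain remaining ops, then stop
    }

    // Multicast deliveries (enqueue, acquire, accept, release...) are pushed
    // here by the CPG deliver thread; they execute in order on this queue's
    // worker, so per-queue ordering holds without a single global bottleneck.
    void push(Op op) {
        {
            std::lock_guard<std::mutex> l(lock_);
            ops_.push(std::move(op));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            Op op;
            {
                std::unique_lock<std::mutex> l(lock_);
                cv_.wait(l, [this] { return done_ || !ops_.empty(); });
                if (ops_.empty()) return;  // done_ set and backlog drained
                op = std::move(ops_.front());
                ops_.pop();
            }
            op();  // run outside the lock so push() never waits on work
        }
    }

    std::mutex lock_;
    std::condition_variable cv_;
    std::queue<Op> ops_;
    bool done_ = false;
    std::thread worker_;  // declared last: members above are ready first
};

int main() {
    OpQueue q1, q2;  // two broker queues, each with its own worker thread
    q1.push([] { std::cout << "q1: enqueue msg-1\n"; });
    q1.push([] { std::cout << "q1: acquire msg-1\n"; });  // ordered after enqueue
    q2.push([] { std::cout << "q2: enqueue msg-A\n"; });  // concurrent with q1
}
```

The property this illustrates is the one the design relies on: ordering is guaranteed only per queue. CPG still delivers a totally ordered stream, but once an operation is routed to its queue's worker, brokers no longer execute everything in lock-step in the single deliver thread.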
