CN112650592B - Task load balancing and high-availability system based on zookeeper - Google Patents
- Publication number
- CN112650592B (application number CN202110011599.9A)
- Authority
- CN
- China
- Prior art keywords
- module
- task
- node
- information
- zookeeper
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a zookeeper-based task load balancing and high-availability system. The system comprises an application starting module, which starts the application program; a creating module, which creates the application's temporary node in the zookeeper cluster; a zookeeper cluster module, which stores temporary node information; a task issuing module, which creates tasks; a load calculation module, which determines the application with the smallest task load; a task processing module, which executes tasks and writes node information to the zookeeper cluster; a task ending module, which updates the task state and deletes node information from the zookeeper cluster; a monitoring module, which monitors temporary nodes in the zookeeper cluster; a node deletion event receiving module, which evaluates the monitored events; and a task information judging module, which inspects node information in the zookeeper cluster and determines the subsequent flow. The beneficial effects of the invention are as follows: task load balancing and high availability are achieved, tasks are automatically and seamlessly migrated to other applications, and unlimited horizontal scaling of applications is supported.
Description
Technical Field
The invention relates to the technical field of application task load, in particular to a task load balancing and high-availability system based on zookeeper.
Background
Load balancing and high availability for current applications are mostly achieved with front-end proxies such as nginx or F5 software or hardware. This approach can provide high availability and load distribution, but after an application becomes abnormal or the machine it runs on goes down, the tasks that were running on that application cannot be migrated automatically; manual intervention is often required, which is risky and costly.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a zookeeper-based task load balancing and high-availability system with automatic task migration.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
A zookeeper-based task load balancing and high-availability system comprises an application starting module, a creating module, a zookeeper cluster module, a task issuing module, a load calculation module, a task processing module, a task ending module, a monitoring module, a node deletion event receiving module and a task information judging module, wherein:
the application starting module is used for starting the application program and is connected to the creating module and the monitoring module respectively;
the creating module is used for creating the application's temporary node in the zookeeper cluster and is connected to the zookeeper cluster module;
the zookeeper cluster module is used for storing temporary node information;
the task issuing module is used for creating a task and is connected to the load calculation module;
the load calculation module is used for determining the application with the smallest task load and is connected to the zookeeper cluster module and the task processing module respectively;
the task processing module is used for executing tasks and writing node information to the zookeeper cluster, and is connected to the zookeeper cluster module and the task ending module respectively;
the task ending module is used for updating the task state and deleting node information from the zookeeper cluster, and is connected to the zookeeper cluster module;
the monitoring module is used for monitoring temporary nodes in the zookeeper cluster and is connected to the zookeeper cluster module and the node deletion event receiving module respectively;
the node deletion event receiving module is used for evaluating the monitored events and is connected to the monitoring module and the task information judging module respectively;
the task information judging module is used for inspecting node information in the zookeeper cluster and determining the subsequent flow, and is connected to the monitoring module and the task issuing module respectively.
The invention discloses a zookeeper-based task load balancing and high-availability scheme comprising an application starting module, a creating module, a zookeeper cluster module, a task issuing module, a load calculation module, a task processing module, a task ending module, a monitoring module, a node deletion event receiving module and a task information judging module.
Preferably, the application starting module is the start-up process of the application program. The module can be executed multiple times, that is, several application programs can be started to form a cluster for horizontal scaling, and the creating module and the monitoring module are triggered during start-up.
Preferably, the creating module is triggered by the application starting module and is used for collecting the current machine ip and the port opened by the current application and then creating a temporary node in the zookeeper cluster; a temporary (ephemeral) node is a zookeeper concept whose defining characteristic is that the node is deleted automatically if the owning application becomes abnormal. Finally, the collected information and other additional identifiers are stored in the node.
Preferably, the task issuing module is triggered by an external client or the internal monitoring module and is used for receiving task information and starting creation; the task information is first checked for validity and then forwarded to the load calculation module for processing.
Preferably, the load calculation module interacts with the zookeeper cluster to obtain all application information under the server node, parses the taskCount field in each node's content and sorts the nodes in reverse order to take the content of the first node; this yields the ip and port of the application on which the task should run, and an interface request carrying the task information passed in by the task issuing module is then forwarded to the task processing module.
Preferably, the task processing module runs inside a specific application. After receiving the task information it first creates a temporary node for the task in the zookeeper cluster and then executes the real task content, which depends on the specific service; when the task is completed, the task identifier is carried forward and handled by the task ending module.
Preferably, the task ending module mainly updates the task state in zookeeper and deletes the corresponding information. It first locates the task node in zookeeper by the task identifier, then updates the status identifier in the node information, switching it from 1 to 0, and finally deletes the node; the update is performed before the deletion because deleting the node triggers the logic of the monitoring module, and the task would be restarted if the node were deleted directly without the update.
Preferably, the monitoring module is triggered by the application starting module; after each application starts, one monitoring module is always running. It is mainly used for monitoring node deletion events under the task directory in the zookeeper cluster and provides the high-availability function: the nodes under the task directory are all temporary nodes, which by their nature are deleted automatically after the application becomes abnormal or the machine it belongs to goes down, so the monitoring-module logic of the other applications is triggered, and the monitoring trigger result is passed to the node deletion event receiving module for screening.
Preferably, in the node deletion event receiving module, after a message from the monitoring module is received, the type identifier in the message is analyzed; if it indicates a deletion event, the task information judging module is executed with the node information data, otherwise monitoring continues.
Preferably, the task information judging module parses the status identifier in the node information passed in by the node deletion event receiving module. If the status is 0, the task node was deleted normally, no action is taken and monitoring continues; if the status is 1, the task node was deleted automatically, i.e. the task exited abnormally, and the node information is forwarded to the task issuing module for restart, completing the automatic task migration of the whole system.
The beneficial effects of the invention are as follows: the scheme achieves task load balancing and high availability inside the applications without an external proxy program; after an application becomes abnormal or the machine it runs on goes down, the tasks of that application, or of all applications on that machine, are automatically and seamlessly migrated to other applications, and unlimited horizontal scaling of applications is supported.
Drawings
Fig. 1 is a system block diagram of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and detailed description.
In the embodiment shown in fig. 1, a zookeeper-based task load balancing and high-availability system comprises an application starting module, a creating module, a zookeeper cluster module, a task issuing module, a load calculation module, a task processing module, a task ending module, a monitoring module, a node deletion event receiving module and a task information judging module.
The application starting module is used for starting the application program and is connected to the creating module and the monitoring module respectively. The application starting module is the start-up process of the application program; it can be executed multiple times, i.e. several application programs can be started to form a cluster for horizontal scaling, and the creating module and the monitoring module are triggered during start-up. A conventional java micro-service start-up example:
java -jar xxx.jar
Wherein:
xxx.jar is the executable package produced when the application is packaged
The creating module is used for creating the application's temporary node in the zookeeper cluster and is connected to the zookeeper cluster module. The creating module is triggered by the application starting module and collects information such as the current machine ip and the port opened by the current application, and then creates a temporary node in the zookeeper cluster; a temporary (ephemeral) node is a zookeeper concept whose characteristic is that the node is deleted automatically if the owning application becomes abnormal. Finally the collected information and other additional identifiers are stored in the node; the node storage directory is explained in detail in the zookeeper cluster module. The node storage format of a single application is:
Node path: /zookeeper/server/172.17.230.221:8080
The node content:
{
ip:"172.17.230.221",
port:8080,
taskCount:0
}
Wherein:
ip is the ip of the machine where the current application is located and is used for subsequent task issuing
port is the port opened by the current application and is used for subsequent task issuing
taskCount is the number of tasks currently running in the application; it starts at 0, is incremented by 1 after a task is successfully issued and decremented by 1 after a task completes, and is the main indicator of load
The pseudo code for creating the temporary node is:
// zookeeper client instance object
curator
    // start creation
    .create()
    // also create parent nodes if needed
    .creatingParentsIfNeeded()
    // create a temporary (ephemeral) node
    .withMode(CreateMode.EPHEMERAL)
    // node path and content
    .forPath("/zookeeper/server/172.17.230.221:8080",
        "{ip:\"172.17.230.221\",port:8080,taskCount:0}".getBytes("utf-8"));
The zookeeper cluster module is used for storing temporary node information. Zookeeper supports the load balancing and high availability of the whole system through the temporary nodes created inside it; the node storage directory is planned as follows:
/
/server
/server1
/server2
/server3
/...
/task
/task1
/task2
/task3
/...
Wherein:
Under the /server node, the information of all started application programs is stored; server1 is a machine ip + application port, for example 172.17.230.221:8080, and the node content rules are described in the [creating module]
Under the /task node, task1 is a task identifier (which depends on the specific service and is not elaborated here), and the node content rules are described in the [task processing module]
The task issuing module is used for creating a task and is connected to the load calculation module. The task issuing module is triggered by an external client or the internal monitoring module and is used for receiving task information and starting creation; the task information is first checked for validity, for example whether the task name is duplicated, whether the task content is legal and whether the task creator exists, and is then forwarded to the load calculation module for processing.
The load calculation module is used for determining the application with the smallest task load and is connected to the zookeeper cluster module and the task processing module respectively. By interacting with the zookeeper cluster it obtains all application information under the server node, parses the taskCount field in each node's content and sorts the nodes in reverse order to take the content of the first node; this yields the ip and port of the application on which the task should run, and an interface request carrying the task information passed in by the task issuing module is then forwarded to the task processing module.
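Purely as an illustration, selecting the least-loaded application could look like the sketch below; the JSON layout matches the server-node content shown earlier, while the class name LoadCalculator, the helper pickLeastLoaded and the regex-based parsing are assumptions rather than the patent's own code:

import org.apache.curator.framework.CuratorFramework;
import java.nio.charset.StandardCharsets;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LoadCalculator {
    private static final Pattern TASK_COUNT = Pattern.compile("taskCount:(\\d+)");

    // returns the server child ("ip:port") with the smallest taskCount
    public static String pickLeastLoaded(CuratorFramework curator) throws Exception {
        List<String> children = curator.getChildren().forPath("/zookeeper/server");
        return children.stream()
                .min(Comparator.comparingInt((String child) -> taskCount(curator, child)))
                .orElseThrow(() -> new IllegalStateException("no application registered"));
    }

    private static int taskCount(CuratorFramework curator, String child) {
        try {
            byte[] data = curator.getData().forPath("/zookeeper/server/" + child);
            Matcher m = TASK_COUNT.matcher(new String(data, StandardCharsets.UTF_8));
            return m.find() ? Integer.parseInt(m.group(1)) : Integer.MAX_VALUE;
        } catch (Exception e) {
            return Integer.MAX_VALUE; // skip nodes that cannot be read
        }
    }
}

The returned ip:port string would then be used to forward the interface request, together with the task information, to the task processing module of that application.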
The task processing module is used for executing tasks and writing node information to the zookeeper cluster, and is connected to the zookeeper cluster module and the task ending module respectively. The task processing module runs inside a specific application; after receiving the task information it creates a temporary node for the task in the zookeeper cluster. The node rules for a single task are as follows:
Node path: /zookeeper/task/fa79a402c1124a8e8ed86e6d8f3c777d
The node content:
{
server:"172.17.230.221:8080",
id:"fa79a402c1124a8e8ed86e6d8f3c777d",
status:1
}
Wherein:
fa79a402c1124a8e8ed86e6d8f3c777d (the node name) is the identifier of the task
server is the application node to which the current task belongs
id is the identifier of the task
status is the status of the task: 1 = running, 0 = stopped
The real task content is then executed; it depends on the specific service, such as a video transcoding task or a video screenshot task, and after the task is completed the task identifier is carried forward and handled by the task ending module.
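A sketch of this node-creation step, assuming the same Curator client as in the earlier example; the class name TaskRegistrar and the string-built payload are illustrative only:

import org.apache.curator.framework.CuratorFramework;
import org.apache.zookeeper.CreateMode;
import java.nio.charset.StandardCharsets;

public class TaskRegistrar {
    // creates the ephemeral task node with status 1 (running) before the task body is executed
    public static void registerTask(CuratorFramework curator, String taskId, String server) throws Exception {
        String payload = "{\"server\":\"" + server + "\",\"id\":\"" + taskId + "\",\"status\":1}";
        curator.create()
               .creatingParentsIfNeeded()
               .withMode(CreateMode.EPHEMERAL)   // deleted automatically if the owning application dies
               .forPath("/zookeeper/task/" + taskId,
                        payload.getBytes(StandardCharsets.UTF_8));
    }
}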
The task ending module is used for updating the task state and deleting node information from the zookeeper cluster, and is connected to the zookeeper cluster module. The task ending module mainly updates the task state in zookeeper and deletes the corresponding information: it first locates the task node under the task directory in zookeeper by the task identifier, then updates the status identifier in the node information, switching it from 1 to 0, and then deletes the node. A pseudo-code example of deleting the node:
// zookeeper client instance object
curator
    // initiate deletion
    .delete()
    // execute asynchronously
    .inBackground()
    // node path
    .forPath("/zookeeper/task/fa79a402c1124a8e8ed86e6d8f3c777d");
The update is performed before the deletion because deleting the node triggers the logic of the monitoring module; if the node were deleted directly without first switching the status, the task would be restarted.
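A compact sketch of that update-then-delete sequence, assuming the task-node JSON shown earlier; TaskFinisher and finishTask are assumed names, not part of the patent:

import org.apache.curator.framework.CuratorFramework;
import java.nio.charset.StandardCharsets;

public class TaskFinisher {
    // flips status from 1 to 0 so the watchers treat the removal as a normal completion, then deletes the node
    public static void finishTask(CuratorFramework curator, String taskId, String server) throws Exception {
        String path = "/zookeeper/task/" + taskId;
        String stopped = "{\"server\":\"" + server + "\",\"id\":\"" + taskId + "\",\"status\":0}";
        curator.setData().forPath(path, stopped.getBytes(StandardCharsets.UTF_8)); // update first
        curator.delete().inBackground().forPath(path);                             // then delete
    }
}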
The monitoring module is used for monitoring temporary nodes in the zookeeper cluster and is connected to the zookeeper cluster module and the node deletion event receiving module respectively. The monitoring module is triggered by the application starting module; after each application starts, one monitoring module is always running, mainly to monitor node deletion events under the task directory in the zookeeper cluster. This module provides the high-availability function: the nodes under the task directory are all temporary nodes, which by their nature are deleted automatically after the application becomes abnormal or the machine it belongs to goes down, so the monitoring-module logic of the other applications is triggered. The listening pseudo code:
// watch the children of the task directory; "curator" is the zookeeper client instance
PathChildrenCache pathChildrenCache = new PathChildrenCache(curator, "/task/", true);
// register the listener; cacheImpl implements the PathChildrenCacheListener interface and
// receives add/update/delete notifications for the children of the node
pathChildrenCache.getListenable().addListener(cacheImpl, Executors.newCachedThreadPool());
// start listening
pathChildrenCache.start(StartMode.BUILD_INITIAL_CACHE);
The monitoring trigger result is then passed to the node deletion event receiving module for screening.
The node deletion event receiving module is used for evaluating the monitored events and is connected to the monitoring module and the task information judging module respectively. After a message from the monitoring module is received, the type identifier in the message is analyzed; if it is CHILD_REMOVED the event is treated as a deletion and the task information judging module is executed with the node information data, otherwise monitoring continues.
The zookeeper message notification format is as follows:
{
type:CHILD_REMOVED,
data:{xxxx}
}
Wherein:
type indicates the message type: CHILD_ADDED is node addition, CHILD_UPDATED is node update, CHILD_REMOVED is node deletion
data is the content of the current node; its format is described in the [task processing module] section
The task information judging module is used for inspecting node information in the zookeeper cluster and determining the subsequent flow, and is connected to the monitoring module and the task issuing module respectively. The module parses the status identifier in the node information passed in by the node deletion event receiving module. If the status is 0, the task node was deleted normally, no action is taken and monitoring continues; if the status is 1, the task node was deleted automatically, i.e. the task exited abnormally, and the node information is forwarded to the task issuing module for restart, completing the automatic task migration of the whole system.
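To picture the three monitoring-side modules working together, the sketch below combines the monitoring, node-deletion-event screening and status-judgment steps in one PathChildrenCacheListener. It is only an illustrative sketch: the class name TaskWatcher, the reissueTask placeholder for the hand-off to the task issuing module, the watched path (taken from the forPath examples above) and the string-based status check are assumptions, not the patent's own code.

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
import org.apache.curator.framework.recipes.cache.PathChildrenCacheListener;
import java.nio.charset.StandardCharsets;

public class TaskWatcher {
    public static void watch(CuratorFramework curator) throws Exception {
        // cache the children of the task directory together with their data
        PathChildrenCache cache = new PathChildrenCache(curator, "/zookeeper/task", true);
        cache.getListenable().addListener(new PathChildrenCacheListener() {
            @Override
            public void childEvent(CuratorFramework client, PathChildrenCacheEvent event) {
                if (event.getType() != PathChildrenCacheEvent.Type.CHILD_REMOVED) {
                    return; // only deletion events matter for task migration
                }
                String json = new String(event.getData().getData(), StandardCharsets.UTF_8);
                // at removal time, status 0 means the task ended normally;
                // status 1 means it was still running, i.e. the owning application died
                if (json.contains("status:1") || json.contains("\"status\":1")) {
                    reissueTask(json); // forward to the task issuing module for restart
                }
            }
        });
        cache.start(PathChildrenCache.StartMode.BUILD_INITIAL_CACHE);
    }

    private static void reissueTask(String taskJson) {
        // placeholder: hand the task information back to the task issuing module
        System.out.println("re-issuing task: " + taskJson);
    }
}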
The invention discloses a zookeeper-based task load balancing and high-availability scheme comprising an application starting module, a creating module, a zookeeper cluster module, a task issuing module, a load calculation module, a task processing module, a task ending module, a monitoring module, a node deletion event receiving module and a task information judging module. The scheme has been applied to the live-broadcast recording and live time-shifting tasks of the Rainbow Cloud product; recording and time-shifting tasks are not interrupted when a server or a program becomes abnormal, which improves the stability of the whole live-broadcast flow.
Claims (3)
1. A zookeeper-based task load balancing and high-availability system, characterized by comprising an application starting module, a creating module, a zookeeper cluster module, a task issuing module, a load calculation module, a task processing module, a task ending module, a monitoring module, a node deletion event receiving module and a task information judging module, wherein:
the application starting module is used for starting the application program and is connected to the creating module and the monitoring module respectively; the application starting module is the start-up process of the application program and can be executed multiple times, that is, a plurality of application programs can be started to form a cluster for horizontal scaling, and the creating module and the monitoring module are triggered during start-up;
the creating module is used for creating the application's temporary node in the zookeeper cluster and is connected to the zookeeper cluster module; the creating module is triggered by the application starting module and is used for collecting the current machine ip and the port opened by the current application and then creating a temporary node in the zookeeper cluster, the temporary node being a zookeeper concept characterized in that the node is deleted automatically if the current application becomes abnormal; finally, the collected information is stored in the node;
the zookeeper cluster module is used for storing temporary node information;
the task issuing module is used for creating a task and is connected to the load calculation module; the task issuing module is triggered by an external client or the internal monitoring module and is used for receiving task information and starting creation, the task information being first checked for validity and then forwarded to the load calculation module for processing;
the load calculation module is used for determining the application with the smallest task load and is connected to the zookeeper cluster module and the task processing module respectively; the load calculation module interacts with the zookeeper cluster to obtain all application information under the node, parses the taskCount field in each node's content and sorts the nodes in reverse order to take the content of the first node, thereby obtaining the ip and port of the application on which the task should run, and then forwards an interface request carrying the task information passed in by the task issuing module to the task processing module;
the task processing module is used for executing tasks and writing node information to the zookeeper cluster, and is connected to the zookeeper cluster module and the task ending module respectively; the task processing module runs inside a specific application, first creates a temporary node for the task in the zookeeper cluster after receiving the task information, then executes the real task content, which depends on the specific service, and after the task is completed the task identifier is carried forward and handled by the task ending module;
the task ending module is used for updating the task state and deleting node information from the zookeeper cluster, and is connected to the zookeeper cluster module; the task ending module mainly updates the task state in zookeeper and deletes the corresponding information: it first locates the task node in zookeeper by the task identifier, updates the status identifier in the node information by switching it from 1 to 0, and then deletes the node; the update is performed before the deletion because deleting the node triggers the logic of the monitoring module, and the task would be restarted if the node were deleted directly without the update;
the monitoring module is used for monitoring temporary nodes in the zookeeper cluster and is connected to the zookeeper cluster module and the node deletion event receiving module respectively;
the node deletion event receiving module is used for evaluating the monitored events and is connected to the monitoring module and the task information judging module respectively;
the task information judging module is used for inspecting node information in the zookeeper cluster and determining the subsequent flow, and is connected to the monitoring module and the task issuing module respectively; in the task information judging module, the status identifier is parsed from the node information passed in by the node deletion event receiving module; if the status is 0 the task node was deleted normally, no action is taken and monitoring continues; if the status is 1 the task node was deleted automatically, i.e. the task exited abnormally, and the node information is forwarded to the task issuing module for restart, completing the automatic task migration of the whole system.
2. The zookeeper-based task load balancing and high-availability system of claim 1, characterized in that the monitoring module is triggered by the application starting module and one monitoring module is always running after each application starts, mainly to monitor node deletion events under the task directory in the zookeeper cluster; this module provides the high-availability function because the nodes under the task directory are all temporary nodes, which by their nature are deleted automatically after the application becomes abnormal or the machine it belongs to goes down, so that the monitoring-module logic of the other applications is triggered and the monitoring trigger result is passed to the node deletion event receiving module for screening.
3. The zookeeper-based task load balancing and high-availability system of claim 2, characterized in that in the node deletion event receiving module, after a message from the monitoring module is received, the type identifier in the message is analyzed; if it indicates a deletion event, the task information judging module is executed with the node information data, otherwise monitoring continues.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110011599.9A CN112650592B (en) | 2021-01-06 | 2021-01-06 | Task load balancing and high-availability system based on zookeeper |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112650592A CN112650592A (en) | 2021-04-13 |
CN112650592B (en) | 2024-04-19
Family
ID=75367635
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110011599.9A Active CN112650592B (en) | 2021-01-06 | 2021-01-06 | Task load balancing and high-availability system based on zookeeper |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112650592B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113347263B (en) * | 2021-06-11 | 2022-10-11 | 上海中通吉网络技术有限公司 | Message cluster management method and system |
CN113873289B (en) * | 2021-12-06 | 2022-02-18 | 深圳市华曦达科技股份有限公司 | Method for live scheduling of IPTV system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11263006B2 (en) * | 2015-11-24 | 2022-03-01 | Vmware, Inc. | Methods and apparatus to deploy workload domains in virtual server racks |
- 2021-01-06: CN202110011599.9A filed in China; granted as CN112650592B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102932210A (en) * | 2012-11-23 | 2013-02-13 | 北京搜狐新媒体信息技术有限公司 | Method and system for monitoring node in PaaS cloud platform |
CN108366086A (en) * | 2017-12-25 | 2018-08-03 | 聚好看科技股份有限公司 | A kind of method and device of control business processing |
CN111163117A (en) * | 2018-11-07 | 2020-05-15 | 北京京东尚科信息技术有限公司 | Zookeeper-based peer-to-peer scheduling method and device |
CN110264353A (en) * | 2019-05-28 | 2019-09-20 | 必成汇(成都)科技有限公司 | High Availabitity trade match system memory-based and method |
CN111522665A (en) * | 2020-04-24 | 2020-08-11 | 北京思特奇信息技术股份有限公司 | Zookeeper-based method for realizing high availability and load balancing of Influxdb-proxy |
Non-Patent Citations (3)
Title |
---|
Lipika Bose Goel; Rana Majumdar. Handling mutual exclusion in a distributed application through Zookeeper. 2015 International Conference on Advances in Computer Engineering and Applications. 2015, full text. *
Gou Limei; Zhang Fengye; Lin Guohua. GIS cluster implementation based on Zookeeper. Computer Engineering and Design. 2017-09-16 (09), full text. *
Deng Jie; Tong Mengjun; Hu Wenze; Lin Yingjie; Hu Yi. Building a near-real-time index update system and its monitoring based on Zookeeper. Computer Era. 2020, Section 2.2. *
Also Published As
Publication number | Publication date |
---|---|
CN112650592A (en) | 2021-04-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105653425B (en) | Monitoring system based on complex event processing engine | |
CN112650592B (en) | Task load balancing and high-availability system based on zookeeper | |
CN109286529B (en) | Method and system for recovering RabbitMQ network partition | |
CN111064789B (en) | Data migration method and system | |
US10505881B2 (en) | Generating message envelopes for heterogeneous events | |
CN109656742B (en) | Node exception handling method and device and storage medium | |
US10924326B2 (en) | Method and system for clustered real-time correlation of trace data fragments describing distributed transaction executions | |
EP2795849B1 (en) | Method and apparatus for messaging in the cloud | |
CN112506702B (en) | Disaster recovery method, device, equipment and storage medium for data center | |
CN110650164B (en) | File uploading method and device, terminal and computer storage medium | |
CN110895488B (en) | Task scheduling method and device | |
CN107688489B (en) | Method and system for scheduling tasks | |
CN106657299B (en) | Attention anchor online reminding method and system | |
CN113760652B (en) | Method, system, device and storage medium for full link monitoring based on application | |
CN111064626A (en) | Configuration updating method, device, server and readable storage medium | |
CN112564990B (en) | Management method for switching audio management server | |
CN109725916B (en) | Topology updating system and method for stream processing | |
CN114064217A (en) | Node virtual machine migration method and device based on OpenStack | |
CN112968815B (en) | Method for realizing continuous transmission in broken network | |
WO2024139011A1 (en) | Information processing method | |
CN113434323A (en) | Task flow control method of data center station and related device | |
US11216352B2 (en) | Method for automatically analyzing bottleneck in real time and an apparatus for performing the method | |
US20230280997A1 (en) | Automated process and system update scheduling in a computer network | |
CN112256456B (en) | Session message transmission method and device based on Dubbo service | |
CN113381887B (en) | Method and device for processing faults of computing nodes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |