CN112346835B - Scheduling processing method and system based on coroutine - Google Patents
Scheduling processing method and system based on coroutine
- Publication number
- CN112346835B (application CN202011142092.9A)
- Authority
- CN
- China
- Prior art keywords
- coroutine
- unit
- running
- state
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/524—Deadlock detection or avoidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45562—Creating, deleting, cloning virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45575—Starting, stopping, suspending or resuming virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45583—Memory management, e.g. access or allocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5018—Thread allocation
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a coroutine-based scheduling processing method and system, wherein the scheduling method comprises the following steps: Step S1, creating at least one coroutine in user mode and caching it; Step S2, importing at least one program to be run in user mode, wherein the program to be run comprises a plurality of instructions and/or data and corresponds to at least one coroutine; Step S3, creating at least one virtual processing unit in user mode, wherein the virtual processing unit is used to run a coroutine to execute the corresponding instructions and/or data; and Step S4, releasing the corresponding processing resources after the coroutine finishes running in the virtual processing unit. This technical scheme provides a user-mode coroutine scheduling scheme in which the coroutine model replaces the thread model; it can greatly improve scheduling performance in various application scenarios and realizes the design and application of the coroutine model on platforms such as Java.
Description
Technical Field
The invention relates to the technical field of user-mode scheduling, and in particular to a coroutine-based scheduling processing method and system.
Background
A coroutine is a lightweight scheduling unit similar to a thread. It works entirely in user mode and is simple, fast and efficient to schedule; in many application scenarios it is more attractive than the traditional thread scheduling model:
First, scheduling efficiency: compared with processes, threads share data and code, which saves a great deal of memory-switching work during scheduling and therefore makes them faster and more efficient. However, threads still cannot be implemented without kernel support (for example, the POSIX thread library is implemented with a 1:1 kernel-thread model), so thread scheduling switches between user mode and kernel mode. Coroutines keep all the original advantages of threads, and their biggest advantage is that they belong entirely to user mode: a coroutine switch is invisible to the kernel, so the heavy switching between user mode and kernel mode is avoided and task-scheduling efficiency is raised to a new level.
Second, the locking mechanism: although different threads in the same process share code and data space, a lock mechanism must be introduced among multiple threads when critical-section data is operated on, in order to guarantee data consistency, and adding locks limits program execution efficiency to a certain extent. Because multiple coroutines run on the same thread and are scheduled automatically by the library, coroutines running on the same thread do not compete for access to critical-section data, which greatly improves program running efficiency.
Third, the handling of IO operations: an IO operation is usually a blocking event that requires the thread to be suspended while waiting, which wastes CPU resources; asynchronous IO, which appeared later, alleviates this waste to some extent but cannot eliminate it completely. When a coroutine reaches an IO operation, the current coroutine is blocked and another runnable coroutine is scheduled at the same time; the originally blocked coroutine is rescheduled when the IO response returns, so CPU resources are not wasted.
Fourth, system limitations: whichever model a traditional thread model adopts, it must ultimately map 1:1 onto the kernel's basic scheduling unit, and it is therefore limited by the operating-system kernel, so the scheduling resources that one server can offer are very limited. A coroutine is a user-level functional module that is not affected by the lower layers of the operating system, so it is naturally free of this limitation: as long as physical memory allows, more scheduling units can be created. At the same time, a coroutine is a finer-grained scheduling unit under a thread and shares all of the thread's data, so its data structure only needs the few registers required for execution; being much smaller than a thread's data structure, coroutines can be created in far greater numbers than threads.
The coroutine model greatly improves on the thread model in the respects above, so a coroutine-based scheduling processing method and system are urgently needed.
Disclosure of Invention
In order to solve the above problems in the prior art, a coroutine-based scheduling processing method and system are provided. The specific technical scheme is as follows:
a scheduling processing method based on coroutine comprises the following steps:
step S1, at least one coroutine is created in a user mode and cached;
s2, importing at least one program to be run in a user mode, wherein the program to be run comprises a plurality of instructions and/or data and corresponds to at least one coroutine;
s3, creating at least one virtual processing unit in the user mode, wherein the virtual processing unit is used for running a coroutine to execute corresponding instructions and/or data;
and S4, releasing the corresponding processing resources after the coroutine runs in the virtual processing unit.
Preferably, in the scheduling processing method, step S3 further includes:
Step S31, acquiring the number of runnable coroutines of each virtual processing unit;
and Step S32, allocating coroutines to the corresponding virtual processing units according to the number of runnable coroutines and forming the corresponding running queues.
Preferably, in the scheduling processing method, step S4 further includes:
Step S41, during the running of the coroutine, judging whether a first user instruction from the outside exists and outputting a corresponding first judgment result;
Step S42, according to the first judgment result, when the first user instruction exists, selecting a coroutine from the running queue to switch with the currently running coroutine.
Preferably, in the scheduling processing method, step S4 further includes:
Step S4a, during the running of the coroutine, judging whether a blocking event exists and outputting a corresponding second judgment result;
and Step S4b, according to the second judgment result, when a blocking event exists, selecting a coroutine from the running queue to switch with the currently running coroutine and suspending the currently running coroutine.
Preferably, in the scheduling processing method, step S4 further includes:
Step S4c, according to the second judgment result, when the blocking event ends, reinserting the suspended coroutine into the running queue.
Preferably, in the scheduling processing method, each virtual processing unit comprises a plurality of virtual registers;
the switching process comprises the following steps:
Step A1, storing the memory data in each virtual register into a memory area;
Step A2, storing the address data of the currently running coroutine into a stack data structure;
Step A3, restoring the address data of the coroutine being switched to;
and Step A4, restoring the memory data from the memory area to the corresponding virtual registers.
A scheduling processing system, applying any one of the above scheduling processing methods, includes:
a creating unit, used to create at least one coroutine according to an external creation instruction;
a cache unit, connected to the creating unit and used to cache the created coroutines;
and a virtual processing unit, used to run a coroutine in order to execute the instructions and/or data of the program to be run corresponding to that coroutine.
Preferably, the scheduling processing system further includes:
a guide unit, connected to each virtual processing unit and to the cache unit, which allocates a corresponding number of coroutines to each virtual processing unit according to its number of runnable coroutines and forms the corresponding running queue.
Preferably, the scheduling processing system further includes:
a first judging unit, used to judge whether a first user instruction from the outside exists and to output a corresponding first judgment result;
and a switching unit, connected to the first judging unit and to each virtual processing unit, which, according to the first judgment result, selects a coroutine from the running queue to switch with the currently running coroutine when the first user instruction exists.
Preferably, the scheduling processing system further includes:
a second judging unit, used to judge whether a blocking event exists and to output a corresponding second judgment result;
the switching unit is also connected to the second judging unit and, according to the second judgment result, when a blocking event exists, selects a coroutine from the running queue to switch with the currently running coroutine and suspends the currently running coroutine.
Preferably, the scheduling processing system further includes:
a recovery unit, connected to the second judging unit and to each virtual processing unit, which reinserts the suspended coroutine into the running queue when the blocking event ends.
Preferably, the scheduling processing system further includes:
an identification unit, used to identify the real-time state of a coroutine:
when the coroutine is in the cache unit, it is marked as being in the new state;
when the coroutine is in the running queue, it is marked as being in the ready state;
when the coroutine is running in the virtual processing unit, it is marked as being in the running state;
when the coroutine has finished running in the virtual processing unit, it is marked as being in the terminated state;
when the coroutine is suspended in the switching unit, it is marked as being in the blocked state.
This technical scheme has the following advantages or beneficial effects:
it provides a user-mode coroutine scheduling scheme in which the coroutine model replaces the thread model; it can greatly improve scheduling performance in various application scenarios and realizes the design and application of the coroutine model on platforms such as Java.
Drawings
Fig. 1 is a schematic flow diagram of a scheduling processing method in a coroutine-based scheduling processing method and system of the present invention.
Fig. 2-5 are schematic diagrams illustrating a scheduling method according to a preferred embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a scheduling processing system in the scheduling processing method and system based on coroutine of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of the embodiments of the present invention fall within the scope of protection of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In view of the above problems in the prior art, a coroutine-based scheduling processing method and system are provided. The specific technical scheme is as follows:
A coroutine-based scheduling processing method, as shown in Fig. 1, comprises:
Step S1, creating at least one coroutine in user mode and caching it;
Step S2, importing at least one program to be run in user mode, wherein the program to be run comprises a plurality of instructions and/or data and corresponds to at least one coroutine;
Step S3, creating at least one virtual processing unit in user mode, wherein the virtual processing unit is used to run a coroutine to execute the corresponding instructions and/or data;
and Step S4, releasing the corresponding processing resources after the coroutine finishes running in the virtual processing unit.
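By way of illustration only, the flow of steps S1–S4 can be sketched in Java as below. The class and method names (UserScheduler, submit, createVirtualCpu) are assumptions introduced for this sketch, and a coroutine is approximated as a task object on a user-mode queue, since the standard Java library exposes no stack-switching primitive; this is not the implementation of the present disclosure.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative only: coroutines are modelled as task objects queued entirely in user mode.
public final class UserScheduler {
    // S1: newly created "coroutines" are cached in a user-mode buffer.
    private final BlockingQueue<Runnable> buffer = new ArrayBlockingQueue<>(1024);

    // S2: importing a program to be run = handing over its instructions/data as a task.
    public void submit(Runnable programToRun) {
        buffer.add(programToRun);
    }

    // S3: a virtual processing unit is one Java thread that runs queued coroutines.
    public Thread createVirtualCpu(String name) {
        Thread vcpu = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Runnable coroutine = buffer.take(); // bind a cached coroutine
                    coroutine.run();                    // execute its instructions/data
                    // S4: the finished coroutine's resources are released here simply by
                    // letting the task object become unreachable.
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, name);
        vcpu.setDaemon(true);
        return vcpu;
    }

    public static void main(String[] args) throws InterruptedException {
        UserScheduler scheduler = new UserScheduler();
        scheduler.createVirtualCpu("vcpu-0").start();
        scheduler.submit(() -> System.out.println("coroutine task executed"));
        Thread.sleep(100); // give the daemon virtual CPU a moment to run the task
    }
}
```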
As a preferred embodiment, as shown in Fig. 2, step S3 of the scheduling processing method further includes:
Step S31, acquiring the number of runnable coroutines of each virtual processing unit;
and Step S32, allocating coroutines to the corresponding virtual processing units according to the number of runnable coroutines and forming the corresponding running queues.
In a preferred embodiment of the present invention, a newly created coroutine is cached in a buffer area and must be bound to a virtual processing unit before it runs; the number of runnable coroutines of each virtual processing unit is acquired so that the corresponding coroutines can be fetched and bound, forming the corresponding running queue in which they wait to be scheduled and executed.
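A minimal sketch of steps S31/S32, assuming a shared buffer of newly created coroutines and a fixed per-unit capacity; the names and the capacity constant are illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of S31/S32: fill a virtual processing unit's running queue from the shared buffer
// of newly created coroutines, up to the unit's remaining runnable-coroutine capacity.
final class RunQueueBinder {
    static final int PER_UNIT_CAPACITY = 256;     // assumed configuration value

    static <C> void bind(Queue<C> newCoroutineBuffer, Deque<C> runQueue) {
        // S31: how many more coroutines this unit can still run.
        int free = PER_UNIT_CAPACITY - runQueue.size();
        // S32: move up to that many coroutines from the buffer into the unit's running queue.
        for (int i = 0; i < free; i++) {
            C coroutine = newCoroutineBuffer.poll();
            if (coroutine == null) {
                break;                            // buffer is empty
            }
            runQueue.addLast(coroutine);          // now waiting to be scheduled and executed
        }
    }

    public static void main(String[] args) {
        Queue<String> buffer = new ConcurrentLinkedQueue<>();
        buffer.add("coroutine-1");
        buffer.add("coroutine-2");
        Deque<String> runQueue = new ArrayDeque<>();
        bind(buffer, runQueue);
        System.out.println(runQueue);             // [coroutine-1, coroutine-2]
    }
}
```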
As a preferred embodiment, as shown in Fig. 3, step S4 of the scheduling processing method further includes:
Step S41, during the running of the coroutine, judging whether a first user instruction from the outside exists and outputting a corresponding first judgment result;
Step S42, according to the first judgment result, when the first user instruction exists, selecting a coroutine from the running queue to switch with the currently running coroutine.
In another preferred embodiment of the present invention, a user program can give up control of the current virtual processing unit through an API and yield its resources to other coroutines that actually need to execute. In this preferred embodiment, the ready-state coroutine closest to the current time is selected from the running queue as the scheduled coroutine, the state of the current coroutine is then changed from the running state to the ready state so that it gives up the corresponding processing resources, and finally the instructions and/or data of the program to be run are handed over to the scheduled coroutine for continued execution by switching the coroutine stacks. The specific steps involved in coroutine switching are described in further detail later.
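The voluntary-yield path described above might be sketched as follows. The state names follow the lifecycle described later, "the ready-state coroutine closest to the current time" is interpreted here as the coroutine that most recently entered the ready state, and all class and method names are assumptions.

```java
import java.util.ArrayDeque;
import java.util.Comparator;
import java.util.Deque;
import java.util.Optional;

// Sketch of the voluntary yield described above; the interpretation of "the ready-state
// coroutine closest to the current time" (the one that most recently became ready) is an assumption.
final class YieldSketch {
    enum State { NEW, READY, RUNNING, BLOCKED, TERMINATED }

    static final class Coroutine {
        final String id;
        State state = State.NEW;
        long readySince;                         // when the coroutine last entered READY
        Coroutine(String id) { this.id = id; }
    }

    /** Called through the user-facing yield API: give up the current virtual processing unit. */
    static Coroutine yieldCurrent(Coroutine current, Deque<Coroutine> runQueue) {
        // Pick the READY coroutine closest to the current time as the scheduled coroutine.
        Optional<Coroutine> next = runQueue.stream()
                .filter(c -> c.state == State.READY)
                .max(Comparator.comparingLong((Coroutine c) -> c.readySince));
        if (next.isEmpty()) {
            return current;                      // nothing else is ready, keep running
        }

        // The current coroutine drops from RUNNING back to READY, giving up its resources.
        current.state = State.READY;
        current.readySince = System.nanoTime();

        Coroutine scheduled = next.get();
        scheduled.state = State.RUNNING;
        // A real framework would now switch the coroutine stacks (see the switching steps below).
        return scheduled;
    }

    public static void main(String[] args) {
        Deque<Coroutine> runQueue = new ArrayDeque<>();
        Coroutine a = new Coroutine("A"); a.state = State.RUNNING;
        Coroutine b = new Coroutine("B"); b.state = State.READY; b.readySince = System.nanoTime();
        runQueue.add(a);
        runQueue.add(b);
        System.out.println("now running: " + yieldCurrent(a, runQueue).id);   // prints "now running: B"
    }
}
```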
As a preferred embodiment of the scheduling processing method, as shown in Fig. 4, step S4 further includes:
Step S4a, during the running of the coroutine, judging whether a blocking event exists and outputting a corresponding second judgment result;
and Step S4b, according to the second judgment result, when a blocking event exists, selecting a coroutine from the running queue to switch with the currently running coroutine and suspending the currently running coroutine.
In another preferred embodiment of the present invention, when a blocking event such as an IO operation occurs, the blocked current coroutine gives up its CPU resources and other coroutines are scheduled to run; unlike a coroutine that voluntarily gives up the CPU, the state of the current coroutine changes to the blocked state rather than to the aforementioned ready state.
As a preferred embodiment, as shown in Fig. 4, step S4 of the scheduling processing method further includes:
Step S4c, according to the second judgment result, when the blocking event ends, reinserting the suspended coroutine into the running queue.
In the above preferred embodiment, the IO operations issued inside a coroutine are all executed asynchronously: once the asynchronous IO operation has completed, the corresponding worker thread calls the recovery interface and notifies the coroutine framework that the blocking event of the corresponding coroutine has ended, and that coroutine can then be rescheduled to continue execution.
In the above preferred embodiment, in a complex multi-threaded environment it must be guaranteed that the blocking operation and the recovery operation of a coroutine are executed serially: the recovery operation may only be executed while the coroutine is in the blocked state, and the state of the coroutine must be changed to the ready state when it is recovered.
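The required serialization of the blocking and recovery operations can be illustrated with an atomic compare-and-set on the coroutine state: recovery only succeeds while the coroutine is actually blocked, and a successful recovery puts it back into the running queue as ready. The AtomicReference-based state field and all names below are assumptions for this sketch, not the implementation of the present disclosure.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the blocking / recovery pair with the serialization described above:
// recovery only succeeds while the coroutine is BLOCKED, and it re-enters the run queue as READY.
final class BlockResumeSketch {
    enum State { READY, RUNNING, BLOCKED, TERMINATED }

    static final class Coroutine {
        final String id;
        final AtomicReference<State> state = new AtomicReference<>(State.READY);
        Coroutine(String id) { this.id = id; }
    }

    /** Called when a blocking event (e.g. an IO operation) occurs in the running coroutine. */
    static void block(Coroutine current) {
        // RUNNING -> BLOCKED; the scheduler then switches to another coroutine in the run queue.
        current.state.compareAndSet(State.RUNNING, State.BLOCKED);
    }

    /** Recovery interface called by the worker thread once the asynchronous IO has completed. */
    static boolean resume(Coroutine blocked, Queue<Coroutine> runQueue) {
        // Legal only while the coroutine is BLOCKED; the compare-and-set serializes block/resume races.
        if (blocked.state.compareAndSet(State.BLOCKED, State.READY)) {
            runQueue.add(blocked);        // reinsert into the run queue for rescheduling
            return true;
        }
        return false;                     // lost the race or not blocked: nothing to do
    }

    public static void main(String[] args) {
        Queue<Coroutine> runQueue = new ConcurrentLinkedQueue<>();
        Coroutine c = new Coroutine("io-coroutine");
        c.state.set(State.RUNNING);
        block(c);
        System.out.println("resumed: " + resume(c, runQueue) + ", state now: " + c.state.get());
    }
}
```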
As a preferred embodiment of the scheduling processing method, each virtual processing unit includes a plurality of virtual registers;
as shown in Fig. 5, the switching procedure includes:
Step A1, storing the memory data in each virtual register into a memory area;
Step A2, storing the address data of the currently running coroutine into a stack data structure;
Step A3, restoring the address data of the coroutine being switched to;
and Step A4, restoring the memory data from the memory area to the corresponding virtual registers.
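A toy model of steps A1–A4 follows, in which the virtual registers are represented by a long[] snapshot and the address data by an integer resume point pushed onto a stack structure; a real implementation (including the Java-platform variant described next) operates on the native stack and hardware registers instead, and all names here are assumptions.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Toy model of switch steps A1–A4: the virtual registers are a long[] and the "address data"
// is an integer resume point; a real switch saves the native stack and hardware registers.
final class ContextSwitchSketch {
    static final class CoroutineContext {
        long[] savedRegisters;                                   // memory area holding the register snapshot (A1/A4)
        final Deque<Integer> addressStack = new ArrayDeque<>();  // stack structure for the address data (A2/A3)
        int resumePoint;
    }

    static void switchTo(long[] virtualRegisters, CoroutineContext outgoing, CoroutineContext incoming) {
        // A1: store the memory data held in the virtual registers into a memory area.
        outgoing.savedRegisters = Arrays.copyOf(virtualRegisters, virtualRegisters.length);
        // A2: store the running coroutine's address data into a stack data structure.
        outgoing.addressStack.push(outgoing.resumePoint);
        // A3: restore the address data of the coroutine being switched to.
        if (!incoming.addressStack.isEmpty()) {
            incoming.resumePoint = incoming.addressStack.pop();
        }
        // A4: restore that coroutine's saved memory data back into the virtual registers.
        if (incoming.savedRegisters != null) {
            System.arraycopy(incoming.savedRegisters, 0, virtualRegisters, 0, virtualRegisters.length);
        }
    }

    public static void main(String[] args) {
        long[] registers = {1, 2, 3};
        CoroutineContext a = new CoroutineContext();
        CoroutineContext b = new CoroutineContext();
        switchTo(registers, a, b);
        System.out.println("saved snapshot of A: " + Arrays.toString(a.savedRegisters));
    }
}
```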
In another preferred embodiment of the present invention, the specific scheduling flow of coroutines is explained as follows:
In this embodiment, coroutine switching is similar to the thread switching performed by an operating system. When the method is applied to a Java platform, coroutine switching means switching the Java code execution stack (that is, the Java stack frames). Considering that the Java virtual machine gives the programmer no interface for accessing Java stack frames, and relying on the visibility of the Java native stack and on stack consistency, the program to be run must be deliberately guided into a native method when a coroutine switch occurs, so that all the stack information of the previously running methods can be found. The specific implementation steps include:
1) Store the memory data between the stack top (low address) of the native method and a preset specific address (high address) into the data structure corresponding to the coroutine, and at the same time record the memory data of the relevant registers, i.e., perform a context-saving operation;
2) Copy the stack data saved in the data structure of the scheduled coroutine onto the stack of the current thread, aligned to the high address. It should be noted here that, because different coroutines execute different logic, their stack heights differ; during copying, the position of the preset specific address does not change and must always point to the stack bottom of a certain preset Java method. After copying is complete, the saved memory data and the register context are restored, completing the coroutine switch.
In the above preferred embodiment, when the method is applied to a Java platform, the stack-bottom position of a specific Java method has to be chosen as the base address for saving stack data, because from its creation to the execution of a Java method a Java thread runs a large amount of logic common to the Java virtual machine, and this identical stack data does not need to be saved for every coroutine bound to the thread. Therefore, in a strategy analogous to the fork/exec pair of system calls, only the contents of part of the addresses near the stack top need to be changed in order to change the execution logic of the whole thread.
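As a toy illustration of the save/copy described above, the region of the stack between the stack top and the preset high address can be modelled with a plain byte array; the real mechanism works on the thread's actual native stack through a native method, nothing below touches real stack memory, and all names are assumptions.

```java
import java.util.Arrays;

// Toy model of the stack save/copy around the preset high address; the real mechanism works
// on the thread's native stack through a native method, not on a Java array.
final class StackCopySketch {
    static final int STACK_SIZE = 1 << 16;              // simulated stack region
    final byte[] simulatedStack = new byte[STACK_SIZE];
    final int presetHighAddress = STACK_SIZE;           // fixed base: always the same "stack bottom"

    /** 1) Save everything between the current stack top (low address) and the preset high address. */
    byte[] saveFor(int stackTop) {
        return Arrays.copyOfRange(simulatedStack, stackTop, presetHighAddress);
    }

    /** 2) Copy a scheduled coroutine's saved stack back, aligned to the preset high address. */
    int restore(byte[] savedStack) {
        int newStackTop = presetHighAddress - savedStack.length;   // stack heights differ per coroutine
        System.arraycopy(savedStack, 0, simulatedStack, newStackTop, savedStack.length);
        return newStackTop;                                        // the register context would be restored here
    }

    public static void main(String[] args) {
        StackCopySketch s = new StackCopySketch();
        byte[] saved = s.saveFor(STACK_SIZE - 128);                // this coroutine used 128 bytes of stack
        System.out.println("restored stack top: " + s.restore(saved));
    }
}
```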
A scheduling processing system, applying any one of the above scheduling processing methods, as shown in Fig. 6, includes:
a creating unit 1, used to create at least one coroutine according to an external creation instruction;
a cache unit 2, connected to the creating unit 1 and used to cache the created coroutines;
and a virtual processing unit 3, used to run coroutines in order to execute the instructions and/or data of the programs to be run corresponding to those coroutines.
As a preferred embodiment, the scheduling processing system further includes:
a guide unit 4, connected to each virtual processing unit 3 and to the cache unit 2, which allocates a corresponding number of coroutines to each virtual processing unit 3 according to its number of runnable coroutines and forms the corresponding running queue.
As a preferred embodiment, the scheduling processing system further includes:
a first judging unit 5, used to judge whether a first user instruction from the outside exists and to output a corresponding first judgment result;
and a switching unit 6, connected to the first judging unit 5 and to each virtual processing unit 3, which, according to the first judgment result, selects a coroutine from the running queue to switch with the currently running coroutine when a first user instruction exists.
As a preferred embodiment, the scheduling processing system further includes:
a second judging unit 7, used to judge whether a blocking event exists and to output a corresponding second judgment result;
the switching unit 6 is also connected to the second judging unit 7 and, according to the second judgment result, when a blocking event exists, selects a coroutine from the running queue to switch with the currently running coroutine and suspends the currently running coroutine.
As a preferred embodiment, the scheduling processing system further includes:
a recovery unit 8, connected to the second judging unit 7 and to each virtual processing unit 3, which reinserts the suspended coroutine into the running queue when the blocking event ends.
As a preferred embodiment, the scheduling processing system further includes:
an identification unit, used to identify the real-time state of a coroutine:
when the coroutine is in the cache unit, it is marked as being in the new state;
when the coroutine is in the running queue, it is marked as being in the ready state;
when the coroutine is running in the virtual processing unit, it is marked as being in the running state;
when the coroutine has finished running in the virtual processing unit, it is marked as being in the terminated state;
when the coroutine is suspended in the switching unit, it is marked as being in the blocked state.
In another preferred embodiment of the present invention it should be noted that scheduling control of coroutines causes a series of coroutine state changes, and together these changes make up the whole coroutine lifecycle; in the above preferred embodiment the identification unit marks each state of the coroutine lifecycle, so that the user can see more intuitively whether a coroutine is schedulable and what its running state is.
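The lifecycle tracked by the identification unit can be written down as a small state enumeration with the transitions implied above (new → ready → running → terminated, with running ↔ blocked around blocking events); the enum and its transition table are an illustrative assumption, not code taken from the present disclosure.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// The coroutine lifecycle as marked by the identification unit, with the transitions implied
// by the description above; an illustrative model only.
enum CoroutineState {
    NEW,         // sitting in the cache unit / coroutine buffer
    READY,       // placed in a running queue, waiting to be scheduled
    RUNNING,     // currently executing on a virtual processing unit
    BLOCKED,     // suspended in the switching unit while a blocking event is pending
    TERMINATED;  // finished running; its resources have been released

    private static final Map<CoroutineState, Set<CoroutineState>> ALLOWED = new EnumMap<>(CoroutineState.class);
    static {
        ALLOWED.put(NEW, EnumSet.of(READY));
        ALLOWED.put(READY, EnumSet.of(RUNNING));
        ALLOWED.put(RUNNING, EnumSet.of(READY, BLOCKED, TERMINATED));
        ALLOWED.put(BLOCKED, EnumSet.of(READY));
        ALLOWED.put(TERMINATED, EnumSet.noneOf(CoroutineState.class));
    }

    boolean canTransitionTo(CoroutineState next) {
        return ALLOWED.get(this).contains(next);
    }
}
```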
A specific example is now provided to further explain the present technical solution:
In this specific embodiment, the technical solution is applied to a Java platform and a coroutine library is provided as the scheduling environment. The specific data structures of the coroutine library include:
Coroutine buffer: all coroutines in the new state are stored in this buffer; each virtual CPU fetches the coroutines it needs to run from the buffer when it has idle computing capacity and binds them to itself, where they wait to be scheduled. The coroutine buffer consists of a one-dimensional array, and the capacity of the array determines the concurrency the whole coroutine framework can bear;
Multiple virtual CPUs: each virtual CPU corresponds to a specific Java thread. Its data structure contains only two parts: one points to the coroutine buffer for obtaining newly created coroutines (all virtual CPUs obtain the coroutines they need to execute from the same coroutine buffer), and the other points to the coroutine running environment, through whose API the operation of the whole coroutine framework is maintained. After a virtual CPU is started, it repeatedly obtains newly created coroutines from the coroutine buffer and runs their designated code logic;
Coroutine running environment: it represents the state of each virtual CPU's operation and provides a group of APIs that can control the coroutine framework. The main body of its data structure is two task queues: an idle queue and a running queue. A newly created coroutine obtained from the coroutine buffer is first assigned a task item from the idle queue and is then placed in the running queue; all coroutines that are running or in the blocked state are kept in the running queue. When a coroutine finishes executing, its task item is returned from the running queue to the idle queue, achieving resource reuse. The sum of the lengths of the idle queue and the running queue represents the maximum processing capacity of one virtual CPU; as long as idle task items remain, the virtual CPU can continue to obtain new coroutines from the coroutine buffer and run them;
Virtual CPU context: the context data structure for each virtual CPU's operation, defined in the C language; it maintains the native parts of the contexts of all coroutines bound to that virtual CPU and the preset high address used when switching stacks;
Coroutine context: it consists of a Java part and a native part. The Java part defines the coroutine's identifier, state and execution-method entry information, which provide the data needed by the coroutine's running logic; the native part records the saved stack contents and the register values at the stack top, which provide the support needed for coroutine switching.
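The data structures listed above might look roughly like the following sketch: a bounded coroutine buffer, a virtual CPU that holds a reference to the shared buffer and to its own running environment, and a running environment whose body is an idle queue plus a running queue. All class and field names are assumptions, and the native parts (saved stack, registers, preset high address) are represented only by placeholder fields.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the coroutine-library data structures described above; all names are illustrative.
final class CoroutineLibrarySketch {

    enum State { NEW, READY, RUNNING, BLOCKED, TERMINATED }

    /** Coroutine context: a Java part (id, state, entry point) plus a native part (placeholders). */
    static final class Coroutine {
        final long id;
        volatile State state = State.NEW;
        final Runnable entry;          // execution-method entry information
        byte[] savedStack;             // native part: saved stack contents
        long[] savedRegisters;         // native part: register values at the stack top
        Coroutine(long id, Runnable entry) { this.id = id; this.entry = entry; }
    }

    /** Coroutine buffer: a bounded, array-backed store of coroutines in the NEW state. */
    static final class CoroutineBuffer {
        final BlockingQueue<Coroutine> newCoroutines;
        CoroutineBuffer(int capacity) { newCoroutines = new ArrayBlockingQueue<>(capacity); }
    }

    /** Running environment: an idle queue of task items plus a running queue of live coroutines. */
    static final class RunningEnvironment {
        final Deque<Coroutine> idleQueue = new ArrayDeque<>();     // free task items
        final Deque<Coroutine> runningQueue = new ArrayDeque<>();  // running or blocked coroutines
        boolean hasIdleCapacity() { return !idleQueue.isEmpty(); }
    }

    /** Virtual CPU: one Java thread pointing at the shared buffer and at its own environment. */
    static final class VirtualCpu {
        final CoroutineBuffer sharedBuffer;    // where newly created coroutines are fetched from
        final RunningEnvironment environment;  // per-vCPU scheduling state and API surface
        final Thread carrierThread;            // the specific Java thread backing this vCPU
        final long presetHighAddress;          // native context: base address used when switching stacks
        VirtualCpu(CoroutineBuffer buffer, String name, long presetHighAddress) {
            this.sharedBuffer = buffer;
            this.environment = new RunningEnvironment();
            this.presetHighAddress = presetHighAddress;
            this.carrierThread = new Thread(this::loop, name);
        }
        private void loop() { /* fetch from sharedBuffer while environment.hasIdleCapacity() */ }
    }
}
```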
In the foregoing embodiment, before scheduling control starts, the coroutine framework must first be initialized. This mainly means initializing the virtual CPU contexts and the native part of the coroutine contexts and starting the virtual CPUs, and includes:
1) Read the preset configuration parameters, which include but are not limited to the number of virtual CPUs, the size of the coroutine buffer and the number of coroutines each virtual CPU processes concurrently;
2) Initialize the virtual CPU contexts, construct the corresponding coroutine arrays and determine the preset high address used for stack alignment;
3) Create a number of virtual CPUs according to the configuration parameters, and at the same time construct and start the Java threads corresponding to them;
4) Create a bootstrap coroutine: the bootstrap coroutine is a special built-in coroutine whose stack data is used to initialize the stacks of the other coroutines and which fetches new coroutines from the coroutine buffer for execution when the virtual CPU is idle; the created bootstrap coroutine is given identifier 0 and is the first coroutine created on the corresponding virtual CPU;
5) Initialize the other coroutine stacks with the stack data of the bootstrap coroutine: find all the stack data between the preset high address in the virtual CPU context and the stack-top register, take it as the initial stack data of the bootstrap coroutine and assign it to all the other coroutines;
6) At this point the coroutine framework has finished initialization; the bootstrap coroutine starts running and enters a loop in which it obtains a certain number of newly created coroutines from the coroutine buffer and binds them to the current virtual CPU.
It should be noted that, in this coroutine framework, when a virtual CPU is shut down, the current coroutine must be forcibly switched to the bootstrap coroutine, and the whole Java thread then exits safely through the code logic of the bootstrap coroutine.
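The initialization flow 1)–6) might be outlined as below; the configuration property names, the placeholder value for the preset high address and the bootstrap loop are all assumptions, and the native stack-seeding work of steps 4)–5) is represented only by comments.

```java
// Outline of the initialization flow 1)–6); property names and the address placeholder are assumptions.
final class FrameworkInitSketch {

    static void initialize() {
        // 1) Read the preset configuration parameters (names assumed here).
        int vcpuCount = Integer.getInteger("coroutine.vcpus", 4);
        int bufferSize = Integer.getInteger("coroutine.bufferSize", 1024);
        int perVcpuCoroutines = Integer.getInteger("coroutine.perVcpu", 256);

        // 2) Initialize the virtual CPU contexts: coroutine arrays plus the preset high
        //    address used for stack alignment (just a placeholder value in this sketch).
        long presetHighAddress = 0x7FFF_0000L;
        Object[] coroutineBuffer = new Object[bufferSize];
        Object[][] perVcpuCoroutineArrays = new Object[vcpuCount][perVcpuCoroutines];

        // 3) Create the virtual CPUs and the Java threads that back them.
        Thread[] vcpus = new Thread[vcpuCount];
        for (int i = 0; i < vcpuCount; i++) {
            final int id = i;
            vcpus[i] = new Thread(() -> runBootstrapCoroutine(id), "vcpu-" + i);
        }

        // 4)–5) Create the bootstrap coroutine (identifier 0) on each virtual CPU; the stack
        //        data between the preset high address and the stack top seeds the stacks of
        //        all other coroutines (native work, represented here only by this comment).

        // 6) Start the virtual CPUs; each bootstrap loop then fetches newly created
        //    coroutines from the buffer and binds them to its own virtual CPU.
        for (Thread vcpu : vcpus) {
            vcpu.start();
        }
    }

    private static void runBootstrapCoroutine(int vcpuId) {
        // bootstrap loop placeholder: obtain new coroutines from the buffer and run them
    }

    public static void main(String[] args) {
        initialize();
    }
}
```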
In summary, this technical scheme provides a user-mode coroutine scheduling scheme in which the coroutine model replaces the thread model; it can greatly improve scheduling performance in various application scenarios and at the same time realizes the design and application of the coroutine model on platforms such as Java.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
Claims (9)
1. A coroutine-based scheduling processing method, characterized in that the scheduling processing method comprises the following steps:
step S1, creating at least one coroutine in user mode and caching it;
step S2, importing at least one program to be run in user mode, wherein the program to be run comprises a plurality of instructions and/or data and corresponds to at least one coroutine;
step S3, creating at least one virtual processing unit in user mode, wherein the virtual processing unit is used to run the coroutine to execute the corresponding instructions and/or data;
step S4, releasing the corresponding processing resources after the coroutine finishes running in the virtual processing unit;
the step S4 further comprises:
step S41, during the running of the coroutine, judging whether a first user instruction from the outside exists and outputting a corresponding first judgment result;
step S42, according to the first judgment result, when the first user instruction exists, selecting a coroutine from a running queue to switch with the currently running coroutine; specifically, a user program gives up control of the virtual processing unit through an API and yields its resources to other coroutines that need to be executed; the ready-state coroutine closest to the current time is selected from the running queue as the scheduled coroutine, and the state of the currently running coroutine is then changed from the running state to the ready state so as to give up the corresponding processing resources, wherein the ready state specifically means that, when the coroutine is in the running queue, the coroutine is marked as being in the ready state;
the step S4 further comprises:
step S4a, during the running of the coroutine, judging whether a blocking event exists and outputting a corresponding second judgment result;
step S4b, according to the second judgment result, when the blocking event exists, selecting a coroutine from the running queue to switch with the currently running coroutine and suspending the currently running coroutine; specifically, when the blocking event occurs, the blocked current coroutine gives up its CPU resources and other coroutines are scheduled to run; unlike a coroutine that voluntarily gives up the CPU, the state of the current coroutine is changed to a blocked state rather than to the ready state, wherein the blocked state specifically means that, when the coroutine is suspended in the switching unit, the coroutine is marked as being in the blocked state;
and step S4c, according to the second judgment result, when the blocking event ends, reinserting the suspended coroutine into the running queue.
2. The scheduling processing method of claim 1, wherein said step S3 further comprises:
step S31, acquiring the number of runnable coroutines of each virtual processing unit;
and step S32, allocating the coroutines to the corresponding virtual processing units according to the number of runnable coroutines and forming the corresponding running queues.
3. The scheduling processing method of claim 1, wherein each of said virtual processing units comprises a plurality of virtual registers;
the switching process comprises the following steps:
step A1, storing the memory data in each virtual register into a storage area;
step A2, storing the address data of the currently running coroutine into a stack data structure;
step A3, restoring the address data of the coroutine being switched to;
and step A4, restoring the memory data from the storage area to the corresponding virtual registers.
4. A scheduling processing system, applying the scheduling processing method according to any one of claims 1 to 3, comprising:
a creating unit, used to create at least one coroutine according to an external creation instruction;
a cache unit, connected to the creating unit and used to cache the created coroutine;
and a virtual processing unit, used to run the coroutine to execute the instructions and/or data of the program to be run corresponding to the coroutine.
5. The scheduling processing system of claim 4, wherein the scheduling processing system further comprises:
a guide unit, connected to each virtual processing unit and to the cache unit, which allocates a corresponding number of coroutines to each virtual processing unit according to its number of runnable coroutines and forms the corresponding running queues.
6. The scheduling processing system of claim 4, wherein the scheduling processing system further comprises:
a first judgment unit, used to judge whether a first user instruction from the outside exists and to output a corresponding first judgment result;
and a switching unit, connected to the first judgment unit and to each virtual processing unit, which, according to the first judgment result, selects a coroutine from the running queue to switch with the currently running coroutine when the first user instruction exists; specifically, a user program gives up control of the virtual processing unit through an API and yields its resources to other coroutines to be executed; the ready-state coroutine closest to the current time is selected from the running queue as the scheduled coroutine, and the state of the currently running coroutine is then changed from the running state to the ready state so as to give up the corresponding processing resources, wherein the ready state specifically means that, when the coroutine is in the running queue, the coroutine is marked as being in the ready state.
7. The scheduling processing system of claim 6, wherein the scheduling processing system further comprises:
a second judging unit, used to judge whether a blocking event exists and to output a corresponding second judgment result;
the switching unit is further connected to the second judging unit and, according to the second judgment result, when the blocking event exists, selects a coroutine from the running queue to switch with the currently running coroutine and suspends the currently running coroutine; specifically, when the blocking event occurs, the blocked current coroutine gives up its CPU resources and other coroutines are scheduled to run; unlike a coroutine that voluntarily gives up the CPU, the state of the current coroutine is changed to a blocked state rather than to the ready state, wherein the blocked state specifically means that, when the coroutine is suspended in the switching unit, the coroutine is marked as being in the blocked state.
8. The scheduling processing system of claim 7, wherein the scheduling processing system further comprises:
a recovery unit, connected to the second judging unit and to each virtual processing unit, which reinserts the suspended coroutine into the running queue when the blocking event ends.
9. The scheduling processing system of claim 7, wherein the scheduling processing system further comprises:
an identification unit, used to identify the real-time state of the coroutine:
when the coroutine is in the cache unit, the coroutine is marked as being in a new state;
when the coroutine is in the running queue, the coroutine is marked as being in the ready state;
when the coroutine is running in the virtual processing unit, the coroutine is marked as being in a running state;
when the coroutine has finished running in the virtual processing unit, the coroutine is marked as being in a terminated state;
and when the coroutine is suspended in the switching unit, the coroutine is marked as being in the blocked state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011142092.9A CN112346835B (en) | 2020-10-22 | 2020-10-22 | Scheduling processing method and system based on coroutine |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011142092.9A CN112346835B (en) | 2020-10-22 | 2020-10-22 | Scheduling processing method and system based on coroutine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112346835A CN112346835A (en) | 2021-02-09 |
CN112346835B true CN112346835B (en) | 2022-12-09 |
Family
ID=74359860
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011142092.9A Active CN112346835B (en) | 2020-10-22 | 2020-10-22 | Scheduling processing method and system based on coroutine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112346835B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114466151B (en) * | 2022-04-11 | 2022-07-12 | 武汉中科通达高新技术股份有限公司 | Video storage system, computer equipment and storage medium of national standard camera |
CN116155686B (en) * | 2023-01-30 | 2024-05-31 | 浪潮云信息技术股份公司 | Method for judging node faults in cloud environment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6732138B1 (en) * | 1995-07-26 | 2004-05-04 | International Business Machines Corporation | Method and system for accessing system resources of a data processing system utilizing a kernel-only thread within a user process |
CN104142858A (en) * | 2013-11-29 | 2014-11-12 | 腾讯科技(深圳)有限公司 | Blocked task scheduling method and device |
CN105760237A (en) * | 2016-02-05 | 2016-07-13 | 南京贝伦思网络科技股份有限公司 | Communication method based on coroutine mechanism |
- CN107992344A (en) * | 2016-10-25 | 2018-05-04 | 腾讯科技(深圳)有限公司 | Coroutine implementation method and device |
- CN108021449A (en) * | 2017-12-01 | 2018-05-11 | 厦门安胜网络科技有限公司 | Coroutine implementation method, terminal device and storage medium |
CN111767159A (en) * | 2020-06-24 | 2020-10-13 | 浙江大学 | Asynchronous system calling system based on coroutine |
- 2020-10-22: application CN202011142092.9A filed in China; granted as patent CN112346835B (status: active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6732138B1 (en) * | 1995-07-26 | 2004-05-04 | International Business Machines Corporation | Method and system for accessing system resources of a data processing system utilizing a kernel-only thread within a user process |
CN104142858A (en) * | 2013-11-29 | 2014-11-12 | 腾讯科技(深圳)有限公司 | Blocked task scheduling method and device |
CN105760237A (en) * | 2016-02-05 | 2016-07-13 | 南京贝伦思网络科技股份有限公司 | Communication method based on coroutine mechanism |
- CN107992344A (en) * | 2016-10-25 | 2018-05-04 | 腾讯科技(深圳)有限公司 | Coroutine implementation method and device |
- CN108021449A (en) * | 2017-12-01 | 2018-05-11 | 厦门安胜网络科技有限公司 | Coroutine implementation method, terminal device and storage medium |
CN111767159A (en) * | 2020-06-24 | 2020-10-13 | 浙江大学 | Asynchronous system calling system based on coroutine |
Non-Patent Citations (1)
Title |
---|
- Design of an FPGA-based hardware simulation acceleration unit for a virtual platform; Wu Nan; China Master's Theses Full-text Database, Information Science and Technology Series; 2019-01-15; Chapter 4 *
Also Published As
Publication number | Publication date |
---|---|
CN112346835A (en) | 2021-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0533805B1 (en) | Method for efficient non-virtual main memory management | |
US6948172B1 (en) | Preemptive multi-tasking with cooperative groups of tasks | |
JP5678135B2 (en) | A mechanism for scheduling threads on an OS isolation sequencer without operating system intervention | |
CN109144710B (en) | Resource scheduling method, device and computer readable storage medium | |
CN101727351B (en) | Multicore platform-orientated asymmetrical dispatcher for monitor of virtual machine and dispatching method thereof | |
US6006247A (en) | Method and system for scheduling threads and handling exceptions within a multiprocessor data processing system | |
US20040199927A1 (en) | Enhanced runtime hosting | |
CN100449478C (en) | Method and apparatus for real-time multithreading | |
US20080201561A1 (en) | Multi-threaded parallel processor methods and apparatus | |
US20160085601A1 (en) | Transparent user mode scheduling on traditional threading systems | |
KR102334511B1 (en) | Manage task dependencies | |
CN110597606B (en) | Cache-friendly user-level thread scheduling method | |
US20070198628A1 (en) | Cell processor methods and apparatus | |
WO2007067562A2 (en) | Methods and apparatus for multi-core processing with dedicated thread management | |
AU2001297946A1 (en) | Computer multi-tasking via virtual threading | |
EP1364284A2 (en) | Computer multi-tasking via virtual threading | |
CN112346835B (en) | Scheduling processing method and system based on coroutine | |
CN111158855B (en) | Lightweight virtual clipping method based on micro-container and cloud function | |
JP3810735B2 (en) | An efficient thread-local object allocation method for scalable memory | |
WO2014110702A1 (en) | Cooperative concurrent message bus, driving member assembly model and member disassembly method | |
WO2005048009A2 (en) | Method and system for multithreaded processing using errands | |
CN109656868B (en) | Memory data transfer method between CPU and GPU | |
JP2007280397A (en) | Method for loading program by computer system including a plurality of processing nodes, computer readable medium including program, and parallel computer system | |
CN112230901A (en) | Network programming framework system and method based on asynchronous IO model | |
CN111736998A (en) | Memory management method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2023-10-18
Address after: Room A320, 3rd Floor, No. 1359 Zhonghua Road, Huangpu District, Shanghai, 200010
Patentee after: Shanghai Jiaran Information Technology Co.,Ltd.
Address before: 200001, 4th Floor, Fengsheng Building, 763 Mengzi Road, Huangpu District, Shanghai
Patentee before: SHANGHAI HANDPAL INFORMATION TECHNOLOGY SERVICE Co.,Ltd.
TR01 | Transfer of patent right |