CN110955503B - Task scheduling method and device - Google Patents
- Publication number
- CN110955503B (application CN201811130901.7A)
- Authority
- CN
- China
- Prior art keywords
- task
- code data
- tasks
- coroutine
- execution
- Prior art date
- Legal status: Active (assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The disclosure provides a task scheduling method and device, comprising the following steps: acquiring coroutine code data obtained by program syntax conversion of multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel in a multithreaded manner; adding the plurality of tasks to a scheduler; and controlling, by the scheduler, the switched execution of the plurality of tasks in a single thread according to the coroutine code data. By converting the multithreaded code data into coroutine code data through program syntax conversion and using the scheduler to control the switched execution of the plurality of tasks in a single thread according to the coroutine code data, the multithreaded code data is run in a single thread, which overcomes the prior-art difficulty of running general-purpose programming languages in a browser.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a task scheduling method and device.
Background
Browsers have become an important programming platform. In the prior art, only code data written in the JavaScript language can run directly in a browser; therefore, to run code data written in other programming languages in the browser, that code data is first compiled into JavaScript code data and then run in the browser.
However, JavaScript code data executes in the browser in a single thread. If code written in another programming language does not support running in a single-threaded manner (for example, multithreaded code data written in Python), then even after compilation into JavaScript it cannot run in the browser. Thus multithreaded code data written in Python cannot run in the browser. Code data written in many common programming languages, such as C, C++, and Python, is multithreaded, so running these general-purpose programming languages in a browser is limited.
From the above, the problem of how to run multithreaded code data in a single thread remains to be solved.
Disclosure of Invention
In order to solve the problems in the related art, the present disclosure provides a task scheduling method and apparatus.
A task scheduling method, comprising:
acquiring coroutine code data obtained by program syntax conversion of multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel in a multithreaded manner;
adding the plurality of tasks to a scheduler;
and controlling, by the scheduler, the switched execution of the plurality of tasks in a single thread according to the coroutine code data.
A task scheduling device, comprising:
an acquisition module configured to perform: acquiring coroutine code data obtained by program syntax conversion of multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel in a multithreaded manner;
an adding module configured to perform: adding the plurality of tasks to a scheduler;
a scheduling control module configured to perform: controlling, by the scheduler, the switched execution of the plurality of tasks in a single thread according to the coroutine code data.
In an exemplary embodiment, the multithreaded code data includes task functions for executing a plurality of tasks in parallel in a multithreaded manner, and the task scheduling device further comprises:
a static analysis module configured to perform: performing static analysis on the multithreaded code data, and determining, through the static analysis, the type corresponding to each task function in the multithreaded code data;
a coroutine keyword adding module configured to perform: adding a corresponding coroutine keyword to each task function according to its type, to obtain the coroutine code data, wherein, while the coroutine code data runs in the single thread, the scheduler moves the tasks corresponding to the task functions among different task queues in the single thread according to the coroutine keywords, thereby realizing the switched execution of the plurality of tasks.
In an exemplary embodiment, the task queues include an execution queue storing the currently executed task, a blocking queue storing blocked tasks, and a preparation queue storing tasks to be executed, and the scheduling control module includes:
a blocking judging unit configured to perform: judging, according to the coroutine keywords, whether the task in the execution queue is blocked while the coroutine code data runs in the single thread;
a schedule switching unit configured to perform: if the task in the execution queue is blocked, suspending execution of the blocked task, moving the blocked task to the blocking queue through the scheduler, and moving a task from the preparation queue to the execution queue so that it is executed.
In an exemplary embodiment, the apparatus further comprises:
a first monitoring module configured to perform: monitoring tasks in the blocking queue;
a first transfer module configured to perform: if it is monitored that the blocking condition causing a task in the blocking queue to be blocked has been eliminated, moving that task to the preparation queue through the scheduler to wait for its execution to resume.
In an exemplary embodiment, the apparatus further comprises:
an allocation module configured to perform: allocating, according to the initial execution order of the plurality of tasks in the coroutine code data, the task to be executed first to the execution queue, and allocating the other tasks to the preparation queue.
In an exemplary embodiment, the scheduling control module further includes:
a monitoring unit configured to perform: performing execution state monitoring on tasks in the execution queue;
a transfer unit configured to perform: if it is monitored that the task in the execution queue has completed, removing the completed task from the execution queue through the scheduler, and moving a task from the preparation queue to the execution queue so that it is executed.
In an exemplary embodiment, the apparatus further comprises:
a compilation module configured to perform: compiling the coroutine code data so that it meets the programming-language requirement that the single-threaded running environment places on the code data it runs;
the scheduling control module comprises:
a scheduling control unit configured to perform: scheduling and controlling, by the scheduler, the switched execution of the plurality of tasks in the single thread of the running environment according to the compiled coroutine code data.
A task scheduling device, comprising:
a processor; and
a memory having stored thereon computer-readable instructions which, when executed by the processor, implement the task scheduling method described above.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the task scheduling method described above.
In the technical scheme of the disclosure, the multithreaded code data is converted into coroutine code data through program syntax conversion, and the scheduler is used to schedule and control the switched execution of the plurality of tasks in a single thread according to the coroutine code data, so that the multithreaded code data runs in a single thread, which overcomes the prior-art difficulty of running general-purpose programming languages in a browser.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of an implementation environment to which the present disclosure relates;
FIG. 2 is a block diagram of a server shown in accordance with an exemplary embodiment;
FIG. 3 is a flowchart illustrating a method of task scheduling, according to an example embodiment;
FIG. 4 is a flowchart illustrating steps prior to step S110 in the embodiment of FIG. 3;
FIG. 5 is a flowchart of step S150 of the corresponding embodiment of FIG. 3;
FIG. 6 is a flowchart illustrating steps subsequent to step S151 of the corresponding embodiment of FIG. 5, according to an exemplary embodiment;
FIG. 7 is a flowchart illustrating step S150 of the corresponding embodiment of FIG. 3, according to another exemplary embodiment;
FIG. 8 is a flowchart illustrating a task scheduling method, according to another exemplary embodiment;
FIG. 9 is a block diagram of a task scheduling device, according to an exemplary embodiment;
FIG. 10 is a block diagram of a task scheduling device, according to another exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the appended claims.
FIG. 1 is a schematic diagram illustrating an implementation environment in accordance with an exemplary implementation. The implementation environment comprises: a server 200 and at least one terminal 100 (only two are shown in fig. 1).
The terminal 100 may be an electronic device, such as a laptop, a desktop computer, or a smartphone, that can run an application client. The server 200 is the server corresponding to the client running on the terminal 100 and is configured to exchange data with that application client so as to provide it with the corresponding service. For example, code data is written on the client of the terminal; the server performs the switched execution of the multiple tasks in that code data according to the technical scheme of the present disclosure and returns the running result to the terminal 100, or returns the running process to the terminal 100 during execution, where it is displayed to the user.
The terminal 100 and the server 200 establish a network connection to communicate; the association between them includes the network and/or hardware protocols and the data associations between the terminal and the server.
Fig. 2 is a block diagram illustrating a server 200, which server 200 may be used to implement task scheduling in accordance with the methods of the present disclosure, according to an exemplary embodiment.
It should be noted that this server is only one example adapted to the present disclosure, and should not be construed as providing any limitation on the scope of use of the present disclosure. Nor should the server be construed as necessarily relying on or necessarily having one or more of the components of the exemplary server 200 shown in fig. 2.
The hardware structure of the server may vary widely depending on its configuration and performance. As shown in fig. 2, the server 200 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
Wherein, the power supply 210 is used for providing an operating voltage for each hardware device on the server 200.
The interface 230 includes at least one wired or wireless network interface 231, at least one serial-to-parallel interface 233, at least one input-output interface 235, and at least one USB interface 237, etc., for communicating with external devices.
The memory 250 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk; the resources stored thereon include an operating system 251, application programs 253, and data 255, and the storage mode may be transient or permanent. The operating system 251 manages and controls the hardware devices and application programs 253 on the server 200 so that the central processor 270 can compute and process the mass data 255; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc. The application 253 is a computer program that performs at least one specific task on top of the operating system 251 and may include at least one module (not shown in fig. 2), each of which may include a series of computer-readable instructions for the server 200. The data 255 may be, for example, code data stored on a disk.
The central processor 270 may include one or more of the above processors and is configured to communicate with the memory 250 via a bus for computing and processing the mass data 255 in the memory 250.
As described in detail above, the server 200 to which the present disclosure applies completes the task scheduling method by having the central processor 270 execute a series of computer-readable instructions stored in the memory 250.
In an exemplary embodiment, the server 200 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors, digital signal processing devices, programmable logic devices, field-programmable gate arrays, controllers, microcontrollers, microprocessors, or other electronic components for executing the methods described below. Thus, the implementation of the present invention is not limited to any specific hardware circuitry, software, or combination of the two.
FIG. 3 is a flowchart illustrating a method of task scheduling according to an exemplary embodiment. The task scheduling method is used in the terminal 100 of the implementation environment shown in fig. 1. As shown in fig. 3, the task scheduling method includes:
Step S110, obtaining coroutine code data obtained by program syntax conversion of multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel in a multithreaded manner.
Before the specific discussion, the differences between executing multiple tasks in a coroutine manner, a multithreaded manner, and a single-threaded manner are explained:
For example, suppose the tasks to be executed are task A and task B, and, in an ideal case, both are pure arithmetic operations with no contention or data-sharing issues.
When the two tasks are executed in a multithreading manner, the task A is executed by the thread A, the task B is executed by the thread B, and therefore parallel execution of the task A and the task B is realized.
When the two tasks are executed in a single-threaded manner, they run in the task execution order set in the code data; for example, task A is executed first and then task B, so task B can only start after task A has finished.
When the two tasks are executed in a coroutine manner, task A may likewise be executed first and then task B; however, if task A blocks, its execution can be suspended, execution switches to task B, and task A is resumed later. This can also be viewed as yielding between coroutines: coroutine A performs task A and coroutine B performs task B, but only one coroutine's task is executing at any moment.
From the above: in multithreading, multiple tasks may be executed in parallel (i.e., at the same time). In a single thread, multiple tasks are executed sequentially: only one task can execute at a time, and the next task starts only after the previous one completes. In a coroutine, only one task executes at any moment, but the execution order among tasks can be switched.
In addition, it should be noted that threads themselves may also be switched. For example, if two tasks C and D need to be executed in thread I while thread II temporarily has no task to process, then during execution one of thread I's tasks, say task C, may be switched to thread II, so that thread I executes task D and thread II executes task C. Scheduling among threads is performed by the operating system, whereas scheduling of tasks within coroutines is performed by the corresponding code data, for example by yielding with coroutine keywords such as yield in the coroutine code data. The cost of switching between threads is therefore greater than the cost of a coroutine switch, and thread switching is slower than coroutine switching.
The switched execution, in a single thread, of the multiple tasks of the multithreaded code data realized by the technical scheme of the disclosure is performed in the coroutine manner, i.e., the switching is realized by the code data itself.
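As an illustration, this kind of cooperative switching realized by the code data itself can be sketched in Python with generator-based coroutines (a minimal sketch for illustration, not code from the patent): each yield hands control back to a tiny scheduler running in the same single thread.

```python
# Minimal sketch (not from the patent): two tasks cooperatively
# switching within one thread via generator coroutines.
log = []

def task_a():
    log.append("A1")
    yield          # yield control so another task can run
    log.append("A2")

def task_b():
    log.append("B1")
    yield
    log.append("B2")

def run_round_robin(tasks):
    """Drive all coroutines to completion, switching at every yield."""
    queue = [t() for t in tasks]     # instantiate the generators
    while queue:
        coro = queue.pop(0)
        try:
            next(coro)               # run until the next yield
            queue.append(coro)       # not finished: re-queue it
        except StopIteration:
            pass                     # finished: drop it

run_round_robin([task_a, task_b])
print(log)  # ['A1', 'B1', 'A2', 'B2']: execution interleaves
```

No operating system thread switch is involved: the interleaving is produced entirely by the yield statements and the small loop that drives them.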
Multithreaded code data refers to code data that runs in a multithreaded manner. Whether code data can be multithreaded is constrained by its programming language: for example, if the function library of a programming language provides a thread library through which code data can be executed by multiple threads, then code written with that thread library is multithreaded code data, and it can run in a multithreaded manner in an environment that supports that programming language. Programming languages such as C, C++, and Python support multithreaded operation; the specific programming language of the multithreaded code data is not limited here.
Coroutine code data refers to code data that runs in a coroutine manner; that is, as mentioned above, when run in a coroutine manner, the multiple tasks to be executed in the coroutine code data may be switched among, but only one task executes at a time. Similarly, coroutine code data is constrained by the programming language, i.e., by whether the language provides coroutine-related functions (such as the coroutine keywords mentioned below) through which code data executes in a coroutine manner; code data written with a language's coroutine-related functions is coroutine code data. In the technical scheme of the disclosure, the multithreaded code data is converted into coroutine code data by performing static analysis on the multithreaded code data and adding corresponding coroutine keywords to the task functions.
In which way (single-threaded, multithreaded, or coroutine) code data of a given programming language runs is limited, on the one hand, by the actual running environment of the code data, i.e., whether that environment supports the corresponding running mode, and, on the other hand, by the programming language, i.e., whether it provides the function libraries and interfaces of the corresponding running mode. For example, code data written in JavaScript can run only in a single-threaded manner in a browser, while code data written in a language such as Python, C, or C++ can support both the multithreaded and the coroutine mode: code data written with Python's multithread-related functions can run in a multithreaded manner, and code data written with Python's coroutine-related functions can run in a coroutine manner.
Of course, after the coroutine code data is obtained by program syntax conversion of the multithreaded code data, the tasks it executes at run time are still the same plurality of tasks that the corresponding multithreaded code data was to execute.
In a specific embodiment, the multithreaded code data may be written on a client of the terminal; the server obtains the multithreaded code data from the terminal and performs program syntax conversion on it to obtain the coroutine code data, and then directly executes the obtained coroutine code data according to the task scheduling method of the present disclosure. Of course, coroutine code data obtained by program syntax conversion of multithreaded code data may also be stored directly on the server, so that the server can simply retrieve the coroutine code data and execute it according to the task scheduling method of the present disclosure.
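To make the conversion concrete, here is a hypothetical before-and-after sketch in Python (assumed for illustration; the patent does not give concrete converter output): the blocking time.sleep call of the threaded original becomes an awaitable asyncio.sleep in the coroutine form, and the event loop plays the role of the scheduler.

```python
import asyncio

# Multithreaded original (for comparison; not executed here):
#   import threading, time
#   def task_a():
#       time.sleep(0.01)       # blocks its whole thread
#       return "a done"
#   threading.Thread(target=task_a).start()

# Coroutine form after the (hypothetical) syntax conversion: the
# blocking call gains a coroutine keyword and becomes awaitable.
async def task_a():
    await asyncio.sleep(0.01)  # was time.sleep(0.01)
    return "a done"

async def task_b():
    await asyncio.sleep(0.01)
    return "b done"

async def main():
    # Both tasks now share one thread; the event loop (the scheduler)
    # switches between them whenever one of them awaits.
    return await asyncio.gather(task_a(), task_b())

print(asyncio.run(main()))  # ['a done', 'b done']
```

Both tasks run to completion in a single thread, with the switch points determined by the inserted await keywords rather than by the operating system.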
In step S130, a plurality of tasks are added to the scheduler.
The scheduler is used to schedule the code data so that the multiple tasks in it are executed in a coroutine manner; that is, the scheduler is a coroutine scheduler that supports executing multiple tasks in a coroutine manner. If a programming language provides coroutine-related functions for users to write coroutine code data, it also provides a coroutine scheduler that executes the multiple tasks in that coroutine code data.
Adding the plurality of tasks to the scheduler means adding the task functions corresponding to the plurality of tasks to the coroutine scheduler, so that the scheduler can schedule the tasks as described in step S150.
In one embodiment, if the programming language used by the multithreaded code data does not support writing coroutine code data, the multithreaded code data first needs to be compiled into another language that does, i.e., into multithreaded code data of another programming language, and the program syntax conversion is then performed on the compiled multithreaded code data to obtain the coroutine code data. In step S130, the plurality of tasks to be performed by the multithreaded code data are then added to the coroutine scheduler provided by the coroutine-supporting programming language.
Step S150, controlling, by the scheduler, the switched execution of the plurality of tasks in the single thread according to the coroutine code data.
That is, in step S150, although the coroutine code data is executed in a single thread, the scheduler can schedule according to the coroutine code data during execution, thereby realizing the switched execution of the plurality of tasks within that single thread. The switched execution of the plurality of tasks in step S150 is described in detail below.
In the technical scheme of the disclosure, the multithreaded code data is converted into coroutine code data through program syntax conversion, and the scheduler is used to schedule and control the switched execution of the plurality of tasks in a single thread according to the coroutine code data, so that the multithreaded code data runs in a single thread, which overcomes the prior-art difficulty of running general-purpose programming languages in a browser.
The present disclosure may be applied to programming software, particularly programming software that uses a browser as its platform. After multithreaded code data has been written in a general-purpose programming language, the technical scheme of the disclosure can be used to run it in a single thread: the multithreaded code data is converted by program syntax into coroutine code data, and the coroutine code data is compiled into the JavaScript language, thereby enabling it to run in a browser.
In an exemplary embodiment, the multithreaded code data includes task functions for executing a plurality of tasks in parallel in a multithreaded manner. As shown in fig. 4, before step S110 the method further includes:
Step S011, performing static analysis on the multithreaded code data, and determining, through the static analysis, the type corresponding to each task function in the multithreaded code data.
Static analysis refers to scanning the multithreaded code data, without running it, using techniques such as lexical analysis, syntax analysis, control-flow analysis, and data-flow analysis, so as to determine the type corresponding to each task function in the multithreaded code data.
The static analysis of the multithreaded code data may be performed using static analysis tools such as FindBugs, PMD, Checkstyle, BlueMorpho, Klocwork, LDRA Testbed, HP Fortify, or Parasoft C++ Test, which are not specifically limited here. Of course, a static analysis tool has requirements on the programming language of the code data: for example, the Parasoft C++ Test tool supports static analysis of C and C++ code data, so it cannot be used to analyze code data written in Python. The static analysis tool used therefore depends on the programming language of the code data.
In the running process of the code data, the execution of the task is to execute the task function corresponding to the task, and of course, the task function may include one or more sub-functions, where the one or more sub-functions form the task function.
Through static analysis, it can be judged whether the task corresponding to a given task function may block. Task blocking may be caused by deferred execution of a task function, for example a task function containing a time.sleep() deferred-execution call, or by a task function that needs to read data from, or write data to, other devices or files through an I/O interface, so that the execution time of the corresponding task is long; such a task function is a blocking function.
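As a rough illustration of this step (an assumed sketch using Python's standard ast module, not the patent's analyzer), static analysis can walk each task function's syntax tree and flag any function whose body contains a known blocking call such as time.sleep:

```python
import ast

# Assumed set of APIs treated as blocking for this sketch.
BLOCKING_CALLS = {("time", "sleep")}

def find_blocking_functions(source):
    """Return the names of functions whose body calls a blocking API."""
    blocking = set()
    for func in ast.walk(ast.parse(source)):
        if not isinstance(func, ast.FunctionDef):
            continue
        for node in ast.walk(func):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and isinstance(node.func.value, ast.Name)
                    and (node.func.value.id, node.func.attr) in BLOCKING_CALLS):
                blocking.add(func.name)
    return blocking

src = """
import time

def task_a():
    time.sleep(1)    # deferred execution: task_a may block

def task_b():
    return 1 + 1     # pure arithmetic: never blocks
"""
print(find_blocking_functions(src))  # {'task_a'}
```

A real analyzer would also handle aliased imports and indirect calls; this sketch only shows the principle of classifying task functions without running them.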
Step S013, adding a corresponding coroutine keyword to each task function according to its type, to obtain the coroutine code data, wherein, while the coroutine code data runs in the single thread, the scheduler moves the tasks corresponding to the task functions among different task queues in the single thread according to the coroutine keywords, thereby realizing the switched execution of the plurality of tasks.
The added coroutine keywords depend on the programming language and the type of the task function, for example, yield, await and the like are supported in the python language, and yield, await and the like are added according to the type of the corresponding task function when the coroutine keywords are added, which is not particularly limited herein.
For example, in the multithreaded code data, the execution of the calling thread is deferred through a time. Sleep (1000), the function is a deferred execution function, then the task function containing the function is a deferred execution function, where the numeral 1000 in brackets indicates a deferred time, and after performing static analysis to determine that the task function where the function is located is a deferred execution function, a corresponding co-program keyword await is added to obtain the co-program code data: and waiting (1000), so that when the coroutine data runs to the task function where the function is located, the task corresponding to the task function is considered to be a task which is possibly blocked, and a task queue of the task corresponding to the task function is transferred to execute another task.
For another example, if it is determined by static analysis that the function block() in the multithreaded code data is a blocking function, the corresponding coroutine keyword yield is added to obtain the coroutine code data: yield block(). If another function call_block() calls the blocking function block(), then call_block() is also regarded as a blocking function, and the corresponding coroutine keyword yield is added to obtain the coroutine code data: yield call_block(). Therefore, in the process of executing the coroutine code data, when a coroutine keyword is reached, the task corresponding to the task function in which the keyword is located is regarded as blocked; the running of that task is suspended, and execution switches to another task.
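The block()/call_block() conversion above can be sketched in Python as follows. Both functions are hypothetical stand-ins from the example, and the delegation from call_block() is written with Python's yield from:

```python
import time

def block():
    """A hypothetical blocking function, as in the example above."""
    time.sleep(0)  # stands in for a real blocking I/O operation
    return "io-result"

# After conversion, the blocking call is marked with the coroutine
# keyword yield, turning the enclosing task function into a generator
# that hands control back to the scheduler at the blocking point.
def task():
    yield block()

def call_block():
    # A function that calls a blocking function is itself treated as
    # blocking; in Python, delegating to the converted generator is
    # spelled with yield from.
    yield from task()

print(next(task()))        # runs up to the yield inside task()
print(next(call_block()))  # the caller is likewise suspended at block()
```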
The supported coroutine keywords differ between programming languages; the Python language, for example, supports the yield and await keywords. Likewise, different types of task functions receive different coroutine keywords: the await keyword is added for deferred-execution functions, and the yield keyword is added for blocking functions. Of course, a specific programming language may provide other types of task functions and other coroutine keywords that can be added, which are not specifically limited herein. The foregoing is merely exemplary and is not to be construed as limiting the scope of the present disclosure.
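For the deferred-execution case, a hedged asyncio sketch shows how the await keyword lets two tasks interleave in a single thread; asyncio.sleep stands in for the time.sleep call of the original multithreaded code, and the task names are illustrative:

```python
import asyncio

async def deferred_task(tag):
    # await marks the point where the coroutine yields control back to
    # the event loop, mirroring the deferred-execution function above.
    await asyncio.sleep(0)  # stands in for time.sleep(...) in the threaded code
    return f"{tag} done"

async def main():
    # Both tasks interleave within a single thread.
    return await asyncio.gather(deferred_task("a"), deferred_task("b"))

results = asyncio.run(main())
print(results)
```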
In an exemplary embodiment, the task queues include an execution queue for storing currently executed tasks, a blocking queue for storing blocked tasks, and a preparation queue for storing tasks to be executed, and as shown in fig. 5, step S150 includes:
in step S151, during the process of running the coroutine code data in a single thread, it is determined according to the coroutine keyword whether the task located in the execution queue is blocked.
In step S153, if the task in the execution queue is blocked, execution of the blocked task in the execution queue is suspended, the blocked task is moved to the blocking queue by the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
During the running of the coroutine code data, when execution reaches a task corresponding to a task function containing a coroutine keyword, that task is considered blocked. The scheduler therefore moves it from the execution queue to the blocking queue and moves a task waiting in the preparation queue to the execution queue, thereby executing the newly moved task and realizing the switching execution of a plurality of tasks.
The preparation queue may hold one task or a plurality of tasks. In one embodiment, tasks are fetched from the preparation queue and transferred to the execution queue in the order in which they were placed in the preparation queue, that is, in first-in-first-out order. In another embodiment, tasks are fetched according to their priorities, that is, the task with the highest priority is fetched first and transferred to the execution queue for execution.
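A minimal Python sketch of the three-queue scheme, under the simplifying assumption that every blocking condition eventually clears; the class and task names are illustrative, not taken from this disclosure:

```python
from collections import deque

class Scheduler:
    """Illustrative three-queue coroutine scheduler (a sketch, not the disclosed code)."""

    def __init__(self, tasks):
        self.preparation = deque(tasks)  # tasks waiting to run, FIFO order
        self.blocking = deque()          # tasks currently blocked
        self.finished = []               # names of completed tasks

    def run(self):
        while self.preparation or self.blocking:
            if not self.preparation:
                # Simplifying assumption: the blocking condition of the
                # oldest blocked task has cleared, so it returns to the
                # preparation queue.
                self.preparation.append(self.blocking.popleft())
            # Move the next waiting task into the (single-slot) execution queue.
            name, task = self.preparation.popleft()
            try:
                next(task)                          # run until the task yields (blocks)
                self.blocking.append((name, task))  # park it in the blocking queue
            except StopIteration:
                self.finished.append(name)          # completed: remove from execution queue

def tick(name, steps):
    for _ in range(steps):
        yield  # each yield marks a point where the task would block

sched = Scheduler([("a", tick("a", 2)), ("b", tick("b", 1))])
sched.run()
print(sched.finished)  # ['b', 'a'] — the shorter task finishes first
```

Each yield marks a point where the scheduler judges the task blocked, moves it to the blocking queue, and promotes the next task from the preparation queue, so the two tasks execute interleaved within one thread.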
In an exemplary embodiment, as shown in fig. 6, after step S151, the method further includes:
Step S161, monitoring the tasks located in the blocking queue.
In step S162, if it is monitored that the blocking condition causing the blocking of the task located in the blocking queue is eliminated, the task located in the blocking queue is moved to the preparation queue by the scheduler to wait for resuming the execution of the blocked task.
After a blocked task is moved to the blocking queue, once the condition causing the blocking is eliminated, the task is moved from the blocking queue back to the preparation queue to wait for its execution to resume. A task may be blocked, for example, because it calls another task during execution; the scheduler then moves the calling task to the blocking queue and moves the called task to the execution queue to execute it. When the called task completes, or an exception occurs, the condition causing the blocking is considered eliminated, and the blocked task is moved from the blocking queue back to the preparation queue to wait for resumed execution.
The following is an example of how a task whose task function contains the yield coroutine keyword is switched between the different task queues:
During single-threaded execution of the coroutine code data, a function containing the yield coroutine keyword is a generator, and reaching the yield keyword suspends the generator's execution. The value of the expression following the yield keyword is returned to the caller of the generator. In JavaScript, for example, yield actually returns an IteratorResult object with two attributes, value and done, where value is the result of evaluating the expression at the yield keyword, and done indicates that the generator function has not yet run to completion.
During execution, once the yield coroutine keyword is encountered, the function containing it is suspended, and the scheduler moves the task corresponding to that task function to the blocking queue while moving a task from the preparation queue to the execution queue for execution.
When the generator's next() is called, the suspended function resumes execution. Calling next() corresponds to the moment when, after the condition causing the blocking is eliminated, the blocked task is moved back through the preparation queue to the execution queue. Whether next() is called therefore depends on whether the condition causing the blocking has been eliminated.
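A minimal Python illustration of this suspend-and-resume behavior; in Python, the JavaScript value/done pair described above corresponds to the yielded value and the StopIteration signal, and send() plays the role of next() with a value:

```python
def blocked_task():
    # Reaching yield suspends the generator; the yielded expression's
    # value is returned to the caller, as described above.
    result = yield "blocked on I/O"
    yield f"resumed with {result}"

gen = blocked_task()
first = next(gen)          # runs to the first yield and suspends
print(first)               # -> blocked on I/O
second = gen.send("data")  # resuming models the blocking condition clearing
print(second)              # -> resumed with data
```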
In an exemplary embodiment, as shown in fig. 7, step S150 includes:
step S251, performing execution status monitoring on the tasks located in the execution queue.
In step S252, if it is monitored that the task in the execution queue is completed, the task whose execution is completed is removed from the execution queue by the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
When a task in the execution queue has completed execution, it is removed from the execution queue, so that other tasks waiting in the preparation queue can be executed.
In an exemplary embodiment, before step S151, the method further includes:
and distributing the tasks which are initially executed in the tasks to an execution queue according to the initial execution sequence of the tasks in the coroutine code data, and distributing other tasks in the tasks to a preparation queue.
In an exemplary embodiment, as shown in fig. 8, before step S150, the method further includes:
compiling the coroutine code data so as to meet the programming-language requirements that the running environment of the single thread imposes on running code data.
Step S150 includes:
in step S250, the scheduler schedules and controls the switching execution of the plurality of tasks in a single thread of the running environment according to the compiled coroutine code data.
For example, as mentioned above, in order to run code data in a browser, the code data needs to be compiled into JavaScript. Thus, to meet the programming-language requirements of the actual running environment, the coroutine code data needs to be compiled so that it can run in the actual single-threaded environment.
By compiling the coroutine code data, code data written in various programming languages can be run in a given running environment, giving the method a wide range of application.
The following are apparatus embodiments of the present disclosure, which may be used to execute the task scheduling method embodiments executed by the server 200 of the present disclosure. For details not disclosed in the apparatus embodiments, please refer to the task scheduling method embodiments of the present disclosure.
Fig. 9 is a block diagram illustrating a task scheduling device that may be used in the server 200 of the implementation environment shown in fig. 1 to perform all or part of the steps of the task scheduling method shown in any of the above method embodiments, according to an exemplary embodiment. As shown in fig. 9, the task scheduling device includes:
an acquisition module 110 configured to perform: acquiring coroutine code data obtained by performing program grammar conversion on multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel by multiple threads.
An adding module 130, connected to the acquisition module 110 and configured to perform: adding the plurality of tasks to the scheduler.
A scheduling control module 150, coupled to the adding module 130 and configured to perform: scheduling and controlling, by the scheduler, the switching execution of the plurality of tasks in the single thread according to the coroutine code data.
The implementation process of the functions and roles of each module in the above device is specifically shown in the implementation process of the corresponding steps in the task scheduling method, and will not be described herein.
In an exemplary embodiment, the multithreaded code data includes task functions for multithreading in parallel execution of a plurality of tasks, the task scheduling device further comprising:
a static analysis module configured to perform: and carrying out static analysis on the multi-thread code data, and determining the type corresponding to each task function in the multi-thread code data through static analysis.
The coroutine keyword adding module is configured to execute: adding corresponding coroutine keywords into the task functions according to the type corresponding to each task function to obtain the coroutine code data; in the process that the coroutine code data runs in a single thread, the scheduler switches the tasks corresponding to the task functions among different task queues in the single thread through the coroutine keywords, thereby realizing the switching execution of a plurality of tasks.
After the static analysis module and the coroutine keyword adding module execute the corresponding operations, the obtaining module 110 may directly obtain the corresponding coroutine code data.
In an exemplary embodiment, the task queues include an execution queue for storing currently executed tasks, a blocking queue for storing blocked tasks, and a preparation queue for storing tasks to be executed, and the scheduling control module includes:
a blocking judging unit configured to perform: during the process of running the coroutine code data in a single thread, judging according to the coroutine keyword whether the task located in the execution queue is blocked.
A schedule switching unit configured to perform: if the task in the execution queue is blocked, suspending executing the blocked task in the execution queue, moving the blocked task to the blocking queue through the scheduler, and moving the task in the preparation queue to the execution queue to execute the task moved to the execution queue.
In an exemplary embodiment, the task scheduling device further includes:
a first monitoring module configured to perform: tasks located in the blocking queue are monitored.
A first transfer module configured to perform: if the blocking condition causing the blocking of the task located in the blocking queue is monitored to be eliminated, the task located in the blocking queue is moved to a preparation queue by a scheduler to wait for resuming the execution of the blocked task.
In an exemplary embodiment, the task scheduling device further includes:
an allocation module configured to perform: and distributing the tasks which are initially executed in the tasks to an execution queue according to the initial execution sequence of the tasks in the coroutine code data, and distributing other tasks in the tasks to a preparation queue.
In an exemplary embodiment, the scheduling control module further includes:
a monitoring unit configured to perform: and performing execution state monitoring on the tasks in the execution queue.
A transfer unit configured to perform: if the task in the execution queue is monitored to be completed, the task in the execution queue after completion of execution is removed from the execution queue by the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
In an exemplary embodiment, the task scheduling device further includes:
a compilation module configured to perform: and compiling the coroutine code data to meet the program language requirement of the running environment where the single thread is located on the running code data.
The scheduling control module comprises:
a scheduling control unit configured to perform: and scheduling and controlling the switching execution of a plurality of tasks in a single thread of the running environment according to the compiled coroutine code data through a scheduler.
The implementation process of the functions and roles of each module in the above device is specifically shown in the implementation process of the corresponding steps in the task scheduling method, and will not be described herein.
It is to be understood that these modules may be implemented in hardware, software, or a combination of both. When implemented in hardware, these modules may be implemented as one or more hardware modules, such as one or more application specific integrated circuits. When implemented in software, the modules may be implemented as one or more computer programs executing on one or more processors, such as a program stored in memory 250 executed by central processor 270 of fig. 2.
The present disclosure also provides a task scheduling device, as shown in fig. 10, which may be used in the server 200 of the implementation environment shown in fig. 1, where the task scheduling device 1000 includes:
a processor 1001; and
a memory having computer-readable instructions stored thereon which, when executed by the processor 1001, implement the task scheduling method in any of the above method embodiments.
The specific manner in which the processor of the apparatus in this embodiment performs operations has been described in detail in the embodiments of the task scheduling method and will not be repeated here.
Optionally, a computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the task scheduling method in any of the above task scheduling method embodiments. The computer-readable storage medium may be, for example, the memory 250 of the server 200 storing a computer program executable by the central processor 270 to implement the task scheduling method described above.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (9)
1. A method for task scheduling, comprising:
acquiring coroutine code data obtained by performing program grammar conversion on multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel by multiple threads;
adding the plurality of tasks to a scheduler;
scheduling, by the scheduler, switching execution of the plurality of tasks in a single thread according to the coroutine code data;
wherein the multithreaded code data includes task functions for executing the plurality of tasks in parallel by multiple threads, and before the acquiring of the coroutine code data obtained by performing program grammar conversion on the multithreaded code data, the method further comprises:
performing static analysis on the multi-thread code data, and determining the type corresponding to each task function in the multi-thread code data through the static analysis;
adding corresponding coroutine keywords into the task functions according to the types corresponding to each task function to obtain coroutine code data; and in the process that the coroutine code data runs in the single thread, the scheduler switches the task corresponding to the task function among different task queues in the single thread through the coroutine key word, so as to realize the switching execution of the tasks.
2. The method of claim 1, wherein the task queues include an execution queue for storing a currently executed task, a blocking queue for storing blocked tasks, and a preparation queue for storing tasks to be executed, and wherein the scheduling, by the scheduler, of the switching execution of the plurality of tasks in the single thread according to the coroutine code data includes:
Judging whether a task in the execution queue is blocked or not according to the coroutine keyword in the process that coroutine code data run in the single thread;
if the task in the execution queue is blocked, suspending executing the blocked task in the execution queue, moving the blocked task to the blocking queue through the scheduler, and moving the task in the preparation queue to the execution queue to execute the task moved to the execution queue.
3. The method of claim 2, wherein the determining, during the running of the coroutine code data in the single thread, whether the task located in the execution queue is blocked according to the coroutine key further comprises:
monitoring tasks in the blocking queue;
if the blocking condition causing the task in the blocking queue to be blocked is monitored to be eliminated, the task in the blocking queue is moved to the preparation queue through the scheduler to wait for the execution of the blocked task to resume.
4. The method of claim 2, wherein before the judging, according to the coroutine keyword, of whether the task in the execution queue is blocked during the running of the coroutine code data in the single thread, the method further comprises:
distributing the task to be executed first among the plurality of tasks to the execution queue according to the initial execution order of the plurality of tasks in the coroutine code data, and distributing the other tasks among the plurality of tasks to the preparation queue.
5. The method of claim 1, wherein the scheduling, by the scheduler, of the switching execution of the plurality of tasks in a single thread according to the coroutine code data further comprises:
performing execution state monitoring on tasks in the execution queue;
and if the task in the execution queue is monitored to be completed, removing the task with completed execution from the execution queue through the scheduler, and moving the task in the preparation queue to the execution queue to execute the task moved to the execution queue.
6. The method of claim 1, wherein prior to scheduling, by the scheduler, the switch execution of the plurality of tasks in a single thread according to the coroutine code data, further comprising:
compiling the coroutine code data to meet the program language requirement of the single-thread running environment on the running code data;
wherein the scheduling, by the scheduler, of the switching execution of the plurality of tasks in a single thread according to the coroutine code data comprises:
and scheduling and controlling the switching execution of the tasks in the single thread of the running environment according to the compiled coroutine code data by the scheduler.
7. A task scheduling device, comprising:
an acquisition module configured to perform: acquiring coroutine code data obtained by performing program grammar conversion on multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel by multiple threads;
an add module configured to perform: adding the plurality of tasks to a scheduler;
a scheduling control module configured to perform: scheduling, by the scheduler, switching execution of the plurality of tasks in a single thread according to the coroutine code data;
wherein the multithreaded code data includes task functions for multithreaded parallel execution of a plurality of tasks, the task scheduling device further comprising:
a static analysis module configured to perform: performing static analysis on the multi-thread code data, and determining the type corresponding to each task function in the multi-thread code data through the static analysis;
The coroutine keyword adding module is configured to execute: adding corresponding coroutine keywords into the task functions according to the types corresponding to each task function to obtain coroutine code data; and in the process that the coroutine code data runs in the single thread, the scheduler switches the task corresponding to the task function among different task queues in the single thread through the coroutine key word, so as to realize the switching execution of the tasks.
8. A task scheduling device, comprising:
a processor; and
A memory having stored thereon computer readable instructions which, when executed by the processor, implement the method of any of claims 1 to 6.
9. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811130901.7A CN110955503B (en) | 2018-09-27 | 2018-09-27 | Task scheduling method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811130901.7A CN110955503B (en) | 2018-09-27 | 2018-09-27 | Task scheduling method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110955503A CN110955503A (en) | 2020-04-03 |
CN110955503B true CN110955503B (en) | 2023-06-27 |
Family
ID=69967908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811130901.7A Active CN110955503B (en) | 2018-09-27 | 2018-09-27 | Task scheduling method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110955503B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112612615B (en) * | 2020-12-28 | 2022-12-06 | 中孚安全技术有限公司 | Data processing method and system based on multithreading memory allocation and context scheduling |
CN112860401B (en) * | 2021-02-10 | 2023-07-25 | 北京百度网讯科技有限公司 | Task scheduling method, device, electronic equipment and storage medium |
CN112988355B (en) * | 2021-03-31 | 2023-12-15 | 深圳市优必选科技股份有限公司 | Program task scheduling method and device, terminal equipment and readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005022384A1 (en) * | 2003-08-28 | 2005-03-10 | Mips Technologies, Inc. | Apparatus, method, and instruction for initiation of concurrent instruction streams in a multithreading microprocessor |
WO2005022381A2 (en) * | 2003-08-28 | 2005-03-10 | Mips Technologies, Inc. | Integrated mechanism for suspension and deallocation of computational threads of execution in a processor |
CN1842769A (en) * | 2003-08-28 | 2006-10-04 | 美普思科技有限公司 | Instruction for initiation of concurrent instruction streams in a multithreading microprocessor |
US7206843B1 (en) * | 2000-04-21 | 2007-04-17 | Sun Microsystems, Inc. | Thread-safe portable management interface |
CN104142858A (en) * | 2013-11-29 | 2014-11-12 | 腾讯科技(深圳)有限公司 | Blocked task scheduling method and device |
CN104199730A (en) * | 2014-08-29 | 2014-12-10 | 浪潮集团有限公司 | Single-thread multi-task processing method based on synchronous I/O multiplexing mechanism |
-
2018
- 2018-09-27 CN CN201811130901.7A patent/CN110955503B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7206843B1 (en) * | 2000-04-21 | 2007-04-17 | Sun Microsystems, Inc. | Thread-safe portable management interface |
WO2005022384A1 (en) * | 2003-08-28 | 2005-03-10 | Mips Technologies, Inc. | Apparatus, method, and instruction for initiation of concurrent instruction streams in a multithreading microprocessor |
WO2005022381A2 (en) * | 2003-08-28 | 2005-03-10 | Mips Technologies, Inc. | Integrated mechanism for suspension and deallocation of computational threads of execution in a processor |
CN1842769A (en) * | 2003-08-28 | 2006-10-04 | 美普思科技有限公司 | Instruction for initiation of concurrent instruction streams in a multithreading microprocessor |
CN104142858A (en) * | 2013-11-29 | 2014-11-12 | 腾讯科技(深圳)有限公司 | Blocked task scheduling method and device |
CN104199730A (en) * | 2014-08-29 | 2014-12-10 | 浪潮集团有限公司 | Single-thread multi-task processing method based on synchronous I/O multiplexing mechanism |
Non-Patent Citations (2)
Title |
---|
Yu Zhiyong, Liu Guangbin, Xu Hualong. Multithreaded application design for distributed measurement and control systems. Computer Engineering and Applications, 1999, (05), full text. *
Yang Shengzhe; Yu Junqing; Tang Jiufei. Dynamic scheduling and optimization of dataflow programs. Computer Engineering and Science, 2017, (07), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN110955503A (en) | 2020-04-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9772879B2 (en) | System and method for isolating I/O execution via compiler and OS support | |
US8261284B2 (en) | Fast context switching using virtual cpus | |
CN110955503B (en) | Task scheduling method and device | |
US20110219373A1 (en) | Virtual machine management apparatus and virtualization method for virtualization-supporting terminal platform | |
WO2022227410A1 (en) | Embedded terminal remote software debugging method | |
US20220414052A1 (en) | Multi-Core Processor, Multi-Core Processor Processing Method, and Related Device | |
US11182318B2 (en) | Processor and interrupt controller | |
US8769233B2 (en) | Adjusting the amount of memory allocated to a call stack | |
CN107526622B (en) | Rapid exception handling method and device for Linux | |
CN106802689B (en) | Method for implementing speed-adjustable high-precision timer under Windows operating system environment | |
US11748108B2 (en) | Instruction executing method and apparatus, electronic device, and computer-readable storage medium | |
US8601488B2 (en) | Controlling the task switch timing of a multitask system | |
JP2021157843A (en) | Method for executing instruction, device, apparatus, and computer readable storage medium | |
CN111338777B (en) | Low-delay high-stability autonomous platform interrupt response method and equipment | |
US6675238B1 (en) | Each of a plurality of descriptors having a completion indicator and being stored in a cache memory of an input/output processor | |
CN109426556B (en) | Process scheduling method and device | |
US20230096015A1 (en) | Method, electronic deviice, and computer program product for task scheduling | |
CN115794390A (en) | Task control device, electronic equipment and storage medium | |
CN112470125B (en) | Interrupt processing method, computer system, and storage medium | |
US9015720B2 (en) | Efficient state transition among multiple programs on multi-threaded processors by executing cache priming program | |
Moisuc et al. | Hardware event handling in the hardware real-time operating systems | |
CN110955507B (en) | Method for multitask access to same IIC bus based on vxWorks system | |
JPH07244594A (en) | Data processor | |
CN117093266A (en) | Instruction processing device, method, electronic device, and storage medium | |
CN114185667A (en) | Data processing method and device and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |