
CN116389492A - Video analysis system, method, apparatus, and computer-readable storage medium - Google Patents

Video analysis system, method, apparatus, and computer-readable storage medium

Info

Publication number
CN116389492A
Authority
CN
China
Prior art keywords
video analysis
video
task
computing center
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310484321.2A
Other languages
Chinese (zh)
Inventor
王宇桐
时晓厚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202310484321.2A priority Critical patent/CN116389492A/en
Publication of CN116389492A publication Critical patent/CN116389492A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/49Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Automatic Tape Cassette Changers (AREA)

Abstract

The present disclosure provides a video analysis system, method, apparatus, and computer-readable storage medium. The system comprises: a client used for initiating a video analysis task to a computing center; the computing center, which is configured with a container running environment, an event library, and a resource scheduler, wherein the event library stores model images corresponding to various video analysis tasks, and the resource scheduler is used for determining a task execution node from a plurality of edge working nodes and the computing center and sending the video analysis task to the task execution node; and the plurality of edge working nodes, each of which is configured with a container running environment, a video docking module, and a task execution module, wherein the video docking module is used for acquiring video data of the video analysis task, and the task execution module is used for acquiring a target model image corresponding to the video analysis task based on the model images of the event library, and creating and running a container in the container running environment according to the target model image to obtain a video analysis result.

Description

Video analysis system, method, apparatus, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video analysis system, a method, an apparatus, and a computer readable storage medium.
Background
With the development of video processing technology, video analysis systems for security monitoring, identity recognition, behavior analysis, and the like have become important components of digital transformation and commercial value mining in various industries.
In the related art, a video analysis system is composed of a cloud server and network cameras. In such a system, the network cameras upload acquired video data to the cloud server, and the cloud server processes the video data centrally to complete the video analysis task.
However, uploading video data to the cloud server for centralized processing occupies substantial network bandwidth resources and easily causes network congestion.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a video analysis system, method, apparatus, and computer-readable storage medium, which at least overcome, to some extent, the problems of high occupation of network bandwidth resources and susceptibility to network congestion in the related art.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a video analysis system comprising: a client used for initiating a video analysis task to a computing center; the computing center, configured with a container running environment, an event library, and a resource scheduler, wherein the event library stores model images corresponding to various video analysis tasks, and the resource scheduler is used for determining a task execution node from a plurality of edge working nodes and the computing center and sending the video analysis task to the task execution node; and the plurality of edge working nodes, each configured with a container running environment, a video docking module, and a task execution module, wherein the video docking module is used for acquiring video data of the video analysis task, and the task execution module is used for acquiring a target model image corresponding to the video analysis task based on the model images of the event library, and creating and running a container in the container running environment according to the target model image to obtain a video analysis result.
In one embodiment of the disclosure, the task execution module is further configured to stop the operation of the container if the video analysis result is obtained.
In one embodiment of the present disclosure, each edge working node is further configured with a memory; the task execution module is used for acquiring the target model image from the event library and storing the target model image in the memory in the case of acquiring the target model image for the first time, and acquiring the target model image from the memory in the case of acquiring the target model image for the nth time, where n is an integer greater than 1.
In one embodiment of the present disclosure, each edge working node is further configured with a resource probe pointer; the resource probe pointer is used for monitoring and reporting the available operation resource information of the edge working node to the computing center; the computing center is also provided with a node state memory; the node state memory is used for storing and updating the available operation resource information of the plurality of edge working nodes; the resource scheduler is configured to determine a task execution node from the plurality of edge working nodes and the computing center according to the available operation resource information of the plurality of edge working nodes.
In one embodiment of the present disclosure, the resource scheduler is configured to determine, according to available running resource information of the plurality of edge working nodes, that the computing center is the task execution node if the plurality of edge working nodes do not satisfy a condition for executing the video analysis task.
In one embodiment of the disclosure, the available operating resource information includes central processing unit CPU occupancy, memory occupancy information, graphics processor GPU utilization, and video memory occupancy information.
In one embodiment of the disclosure, the task execution module is further configured to upload the video analysis result to the computing center; the computing center is also provided with a streaming media server and a file sharing server; the streaming media server is used for storing video class data in the video analysis result; the file sharing server is used for storing image class data in the video analysis result.
In one embodiment of the disclosure, the computing center is further configured with a backup network video recorder; the backup network video recorder is used for storing backup historical videos.
In one embodiment of the present disclosure, each edge working node is further configured with a message routing module; the message routing module is used for sending messages related to the video analysis task to other edge working nodes or the computing center.
In one embodiment of the present disclosure, the system further comprises: and the network cameras are used for acquiring and sending the video data to the task execution node.
According to a second aspect of the present disclosure, there is provided a video analysis method applied to an edge working node, comprising: receiving a video analysis task sent by a computing center; obtaining a target model mirror image corresponding to the video analysis task; acquiring video data corresponding to the video analysis task; and creating a container according to the target model mirror image, and operating the container to process the video data to obtain a video analysis result.
According to a third aspect of the present disclosure, there is provided a video analysis method applied to a computing center, including: receiving a video analysis task sent by a client; acquiring available operation resource information of a plurality of edge working nodes; determining a task execution node from the plurality of edge working nodes and the computing center according to the video analysis task and the available operation resource information; and sending the video analysis task to the task execution node so that the task execution node can execute the video analysis task.
According to a fourth aspect of the present disclosure, there is provided a video analysis apparatus applied to an edge working node, comprising: a first receiving module used for receiving a video analysis task sent by a computing center; a first acquisition module used for acquiring a target model mirror image corresponding to the video analysis task, the first acquisition module being further used for acquiring video data corresponding to the video analysis task; and a processing module used for creating a container according to the target model mirror image and running the container to process the video data to obtain a video analysis result.
According to a fifth aspect of the present disclosure, there is provided a video analysis apparatus applied to a computing center, comprising: a second receiving module used for receiving a video analysis task sent by a client; a second acquisition module used for acquiring available operation resource information of a plurality of edge working nodes; a determining module used for determining a task execution node from the plurality of edge working nodes and the computing center according to the video analysis task and the available operation resource information; and a sending module used for sending the video analysis task to the task execution node so that the task execution node can execute the video analysis task.
According to a sixth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any of the video analytics methods described above via execution of the executable instructions.
According to a seventh aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the video analysis methods described above.
According to an eighth aspect of the present disclosure, there is provided a computer program product comprising a computer program or computer instructions loaded and executed by a processor to cause a computer to implement any of the video analysis methods described above.
The technical scheme provided by the embodiment of the disclosure at least comprises the following beneficial effects:
according to the technical scheme provided by the embodiment of the disclosure, after the computing center receives the video analysis task sent by the client, a task execution node is determined from a plurality of edge working nodes and the computing center through the configured resource scheduler, and the video analysis task is sent to the task execution node. Each edge working node is configured with a container running environment, after the edge working node receives a video analysis task, a task execution module configured by the edge working node can acquire a target model mirror image corresponding to the video analysis task according to the model mirror image in an event library of a computing center, create a container in the container running environment according to the target model mirror image, and process video data acquired by a video docking module according to the container to acquire a video analysis result. The method for completing the video analysis task by using the edge working node can reduce network bandwidth resources occupied by uploading video data to a certain extent and reduce the possibility of network congestion.
In addition, configuring the event library in the computing center so that the edge working nodes can acquire the needed model images from it avoids the problem that the edge working nodes must be adaptively deployed one by one due to the differences among video analysis tasks and the heterogeneity of the edge working nodes, and reduces the cost of deploying the video analysis system and of subsequent system maintenance.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure;
FIG. 9 illustrates a flow chart of a video analysis method in one embodiment of the present disclosure;
FIG. 10 illustrates a flow chart of a video analysis method in one embodiment of the present disclosure;
FIG. 11 shows a schematic diagram of a video analytics device in one embodiment of the present disclosure;
FIG. 12 shows a schematic diagram of a video analytics device in one embodiment of the present disclosure;
fig. 13 shows a block diagram of an electronic device in one embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifiers "a", "an", and "a plurality of" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
FIG. 1 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure. As shown in FIG. 1, the system may include: a client 110, a computing center 120, and a plurality of edge working nodes 130.
The client 110 is configured to initiate a video analysis task to the computing center 120 and to receive a video analysis result fed back by the computing center 120. Embodiments of the present disclosure do not limit the specific processing performed by the video analysis task. For example, the video analysis task may be a target monitoring task; as another example, it may be a human skeleton detection task; as yet another example, it may be a target re-identification task.
In one embodiment, computing center 120 is configured with an event library 121, a resource scheduler 122, and a container runtime environment 123. The event library 121 stores model images corresponding to various video analysis tasks. For example, in the case where the video analysis task is a target monitoring task, the model image corresponding to the video analysis task is an image of a model that can perform target monitoring processing on video data. For another example, in a case where the video analysis task is a human skeleton detection task, the model image corresponding to the video analysis task is an image of a model capable of human skeleton detection on video data.
Embodiments of the present disclosure do not limit which video analysis tasks the model images configured in the event library 121 specifically correspond to. For example, the video analysis tasks corresponding to the model images configured in the event library 121 include: a video segmentation task, a human skeleton detection task, a target tracking task, a target re-identification task, a human behavior detection task, and the like.
The video data is processed as follows to complete each of these video analysis tasks (an illustrative sketch of the video segmentation step is given after this list).
Video segmentation task: cutting the video at certain time intervals and outputting the segmented pictures.
Human skeleton detection task: extracting and calibrating features of the human skeleton in the video through a human body key point detection model, and outputting pictures with the calibrated skeleton or the extracted skeleton features. Embodiments of the present disclosure do not limit which model the human body key point detection model specifically is. For example, the human body key point detection model applied in the embodiments of the present disclosure is OpenPose (a human body posture detection model).
Target detection task: identifying a specified target in the video through a target detection model, marking the target in the video frames with a bounding box, and outputting pictures or videos carrying the bounding box. Embodiments of the present disclosure do not limit which model the target detection model specifically is. For example, the target detection model applied in the embodiments of the present disclosure is yolov5Lite-s.pt (a target detection model).
Target tracking task: continuously calibrating the same target across consecutive video frames by combining a target detection model and a tracking model, marking the target in the video frames with a bounding box and a target number, and outputting consecutive pictures or videos carrying the bounding boxes and numbers. Embodiments of the present disclosure do not limit which model the tracking model specifically is. For example, the tracking model is a DeepSORT tracking algorithm model.
Target re-identification task: extracting features of targets in the video through a neural network model, matching the features against a defined search target, and outputting a re-identification picture in which the matched target in the video or picture is labeled with a number. Embodiments of the present disclosure do not limit which model the neural network model specifically is. For example, the neural network model is ResNet-50 (Residual Network 50, a residual network model).
Human behavior detection task: combining a human skeleton detection model and an action recognition model, extracting the posture and actions of the person in the video, and outputting pictures watermarked with the action detection result. Embodiments of the present disclosure do not limit which model the action recognition model specifically is. For example, the action recognition model applied in the embodiments of the present disclosure is an R-CNN (Region-based Convolutional Neural Network) action recognition model.
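As a concrete illustration of the simplest of these tasks, the following Python sketch performs the video segmentation step described above using OpenCV. It is only a sketch: the file paths, the output naming, and the one-second sampling interval are assumptions for illustration and are not part of the disclosed system.

    # Illustrative sketch of the video segmentation task: sample one frame every
    # interval_s seconds and write it out as a picture. Paths and interval are hypothetical.
    import cv2

    def segment_video(video_path: str, out_dir: str, interval_s: float = 1.0) -> int:
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0       # fall back if FPS is unavailable
        step = max(1, int(round(fps * interval_s)))   # frames between two samples
        index = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                cv2.imwrite(f"{out_dir}/frame_{index:06d}.jpg", frame)
                saved += 1
            index += 1
        cap.release()
        return saved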
The resource scheduler 122 is configured to determine a task execution node from the plurality of edge working nodes 130 and the computing center 120 and to send the video analysis task to the task execution node. The task execution node selected by the resource scheduler 122 may be one of the edge working nodes 130 or the computing center 120.
In one embodiment, in the case that the computing center 120 is used as a task execution node, the computing center 120 may call a target model image corresponding to the video analysis task in the event library 121, construct a container corresponding to the target model image in the container running environment 123, and run the container to process the video data to obtain a video analysis result.
Each of the plurality of edge work nodes 130 is configured with a container runtime environment 131, a video docking module 132, and a task execution module 133.
The video docking module 132 is configured to obtain video data of a video analysis task. The video data of the video analysis task may be real-time video data or pre-stored video data, which is not limited by the embodiment of the present disclosure.
The task execution module 133 is configured to obtain a target model image corresponding to the video analysis task based on the model images of the event library 121, and to create and run a container according to the target model image in the container running environment 131 to obtain a video analysis result. Creating and running the container according to the target model image in the container running environment 131 to obtain a video analysis result includes: creating a container according to the target model image in the container running environment 131, and running the container to process the video data to obtain the video analysis result.
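As an illustration only: assuming the container running environment 131 is Docker-compatible and the Docker SDK for Python is available on the edge working node, the create-and-run step of the task execution module could look roughly like the sketch below. The image name, environment variable, and mount paths are hypothetical.

    # Minimal sketch of the task execution module's container step, assuming a
    # Docker-compatible container running environment and the docker SDK for Python.
    import docker

    def run_analysis_container(image: str, video_source: str, result_dir: str) -> str:
        client = docker.from_env()
        container = client.containers.run(
            image,                                   # target model image
            detach=True,
            environment={"VIDEO_SOURCE": video_source},        # hypothetical interface
            volumes={result_dir: {"bind": "/results", "mode": "rw"}},
        )
        container.wait()           # block until the analysis finishes
        logs = container.logs().decode()
        container.remove()         # release the resources occupied by the container
        return logs

Removing the finished container mirrors the behaviour described below of stopping the container once the video analysis result is obtained, so that its resources are released.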
In another embodiment, the task execution module 133 is further configured to stop the operation of the container if the video analysis result is obtained. After the processing result of the video analysis task is obtained, the corresponding container is stopped, and the resource occupied by the container is released, so that the resource waste can be reduced, and the resource utilization rate can be improved.
According to the technical scheme provided by the embodiment of the disclosure, after the computing center receives the video analysis task sent by the client, a task execution node is determined from a plurality of edge working nodes and the computing center through the configured resource scheduler, and the video analysis task is sent to the task execution node. Each edge working node is configured with a container running environment, after the edge working node receives a video analysis task, a task execution module configured by the edge working node can acquire a target model mirror image corresponding to the video analysis task according to the model mirror image in an event library of a computing center, create a container in the container running environment according to the target model mirror image, and process video data acquired by a video docking module according to the container to acquire a video analysis result. The method for completing the video analysis task by using the edge working node can reduce network bandwidth resources occupied by uploading video data to a certain extent and reduce the possibility of network congestion.
In addition, configuring the event library in the computing center so that the edge working nodes can acquire the needed model images from it avoids the problem that the edge working nodes must be adaptively deployed one by one due to the differences among video analysis tasks and the heterogeneity of the edge working nodes, and reduces the cost of deploying the video analysis system and of subsequent system maintenance.
FIG. 2 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure. As shown in FIG. 2, the system may include: a client 110, a computing center 120, and a plurality of edge working nodes 130.
Each edge working node is configured with a memory 134 in addition to the container running environment 131, the video docking module 132, and the task execution module 133.
A task execution module 133 for acquiring a target model image from the event library 121 and storing the target model image in the memory 134 in the case of acquiring the target model image for the first time; and, in the case of acquiring the target model image for the nth time, acquiring the target model image from the memory 134, n being an integer greater than 1.
By acquiring the corresponding target model mirror image only when a certain type of video analysis task is executed for the first time, manually configuring the target model mirror image on the plurality of edge working nodes one by one is avoided: the model mirror image is configured on the edge working nodes automatically, which reduces the workload of configuring the whole video analysis system and improves the efficiency of configuring the video analysis system.
After the task execution module 133 obtains the target model mirror image, a container is created according to the target model mirror image and run to process the video data with the model corresponding to the mirror image, obtaining a video analysis result.
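A minimal sketch of this fetch-once, reuse-afterwards behaviour follows, assuming the event library 121 is exposed as a container image registry and a Docker-compatible runtime is used locally. The registry address and tag convention are hypothetical.

    # Sketch of the "first time from the event library, nth time from local storage"
    # behaviour of the task execution module, assuming the docker SDK for Python.
    import docker
    from docker.errors import ImageNotFound

    EVENT_LIBRARY_REGISTRY = "registry.compute-center.example:5000"   # hypothetical

    def get_target_model_image(client: docker.DockerClient, task_type: str):
        repo = f"{EVENT_LIBRARY_REGISTRY}/{task_type}"
        try:
            return client.images.get(f"{repo}:latest")     # nth acquisition: local copy
        except ImageNotFound:
            return client.images.pull(repo, tag="latest")  # first acquisition: event library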
FIG. 3 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure. As shown in FIG. 3, the system may include: a client 110, a computing center 120, and a plurality of edge working nodes 130.
In one embodiment, the plurality of edge work nodes 130 are configured with resource probe pointers 135 in addition to container runtime environments 131, video docking modules 132, task execution modules 133; computing center 120 is also configured with node state memory 124.
The resource probe pointer 135 is used to monitor and report the available running resource information of the edge working node to the computing center 120. The resource probe pointer 135 may monitor the available operation resources of the edge working node in real time and generate corresponding available operation resource information, and then report the available operation resource information to the computing center 120.
The node state memory 124 is used to store and update the available operation resource information of the plurality of edge working nodes. After receiving the available operation resource information reported by an edge working node, the computing center 120 stores it in the node state memory 124, and the node state memory 124 replaces the previously reported available operation resource information of that edge working node with the most recently reported information.
The resource scheduler 122 is configured to determine task execution nodes from the plurality of edge work nodes 130 and the computing center 120 based on available operating resource information for the plurality of edge work nodes 130.
The operation resources required by different video analysis tasks are different, and the operation resources required by the video analysis tasks can be determined according to the types of the video analysis tasks. The resource scheduler 122 may determine task execution nodes from the plurality of edge work nodes 130 and the computing center 120 based on the operational resources required for the video analytics task and the available operational resource information for the plurality of edge work nodes 130.
The process by which the resource scheduler 122 determines the task execution node may include: sequencing the corresponding operation resources of the plurality of edge working nodes according to the operation resources required by the video analysis task to obtain a sequencing result; and determining a task execution node which meets the operation resources required by the video analysis task according to the sequencing result.
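A sketch of such a ranking step is shown below as plain Python. The field names (free_cpu, free_gpu_mem, gpu_heavy) and the filtering condition are assumptions for illustration, not fields defined by the disclosure.

    # Sketch of ranking edge working nodes against the resources a video analysis
    # task needs, falling back to the computing center when no node qualifies.
    from typing import Optional

    def pick_task_execution_node(nodes: list[dict], task_req: dict) -> Optional[dict]:
        # Keep only edge working nodes whose remaining resources satisfy the task's needs.
        candidates = [
            n for n in nodes
            if n["free_cpu"] >= task_req["cpu"] and n["free_gpu_mem"] >= task_req["gpu_mem"]
        ]
        if not candidates:
            return None   # no edge node qualifies; the computing center becomes the task execution node
        # Rank by the resource the task is most sensitive to (GPU-heavy tasks by free video memory).
        key = "free_gpu_mem" if task_req.get("gpu_heavy") else "free_cpu"
        return max(candidates, key=lambda n: n[key])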
In one embodiment, the resource scheduler 122 is configured to determine that the computing center 120 is the task execution node when it determines, according to the available running resource information of the plurality of edge working nodes 130, that the plurality of edge working nodes 130 do not satisfy the condition for executing the video analysis task. That is, when none of the plurality of edge working nodes 130 satisfies the condition for executing the video analysis task, the computing center 120 acts as the task execution node and executes the video analysis task.
Taking the computing center 120 as a standby task execution node improves the stability of the video analysis system and avoids situations in which the video analysis system cannot process video analysis tasks because the edge working nodes have insufficient available operation resources, the network connection is poor, or the video memory of the edge working nodes overflows.
In one embodiment, the available operating resource information of the edge node may include: CPU (Central Processing Unit) occupancy, memory occupancy information, GPU (Graphics Processing Unit) utilization, and video memory occupancy information.
For example, in the case that the video analysis task has a high requirement on GPU resources, the plurality of edge working nodes may be ordered according to GPU utilization and video memory occupation information of the plurality of edge working nodes, and a task execution node for executing the video analysis task may be determined according to the ordering result.
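The following is a sketch of the kind of report a resource probe pointer 135 might assemble, assuming psutil (for CPU and memory) and pynvml (for GPU and video memory) are available on the edge working node and a single GPU at index 0. The field names are illustrative assumptions.

    # Sketch of collecting the available-operation-resource information listed above.
    import psutil
    import pynvml

    def collect_available_resources() -> dict:
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        report = {
            "cpu_occupancy": psutil.cpu_percent(interval=1.0),        # CPU occupancy (%)
            "memory_occupancy": psutil.virtual_memory().percent,      # memory occupancy (%)
            "gpu_utilization": util.gpu,                               # GPU utilization (%)
            "video_memory_occupancy": 100.0 * mem.used / mem.total,    # video memory occupancy (%)
        }
        pynvml.nvmlShutdown()
        return report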
In one embodiment, the relative demands of the video segmentation task, the target detection task, the target tracking task, the target re-identification task, the human skeleton detection task, and the human behavior detection task on CPU resources and GPU resources are shown in Table 1 below.
TABLE 1
Video analysis task               CPU resource requirement    GPU resource requirement
Video segmentation task           Low                         -
Target detection task             Low                         Low
Target tracking task              High                        -
Target re-identification task     Low                         Medium
Human skeleton detection task     Medium                      -
Human behavior detection task     Medium                      High
Having the resource scheduler 122 automatically assign the task execution node of a video analysis task according to the available operation resource information of the plurality of edge working nodes 130 takes into account both the available operation resources of the edge working nodes and the operation resources required by the video analysis task, which helps improve resource utilization and the service efficiency of the edge working nodes.
FIG. 4 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure. As shown in FIG. 4, the system may include: a client 110, a computing center 120, and a plurality of edge working nodes 130.
The computing center 120 is configured with a streaming media server 125 and a file sharing server 126 in addition to an event library 121, a resource scheduler 122, and a container running environment 123.
When an edge working node serves as the task execution node, after the video analysis task is completed and a video analysis result is obtained, the task execution module 133 uploads the video analysis result to the computing center 120. After receiving the video analysis result, the computing center 120 stores the video class data in the video analysis result in the streaming media server 125 and the image class data in the video analysis result in the file sharing server 126. That is, the streaming media server 125 is configured to store video class data in the video analysis result, and the file sharing server 126 is configured to store image class data in the video analysis result.
In one embodiment, after the computing center 120 obtains the video analysis result as the task execution node, the video class data in the video analysis result is stored in the streaming media server 125, and the image class data in the video analysis result is stored in the file sharing server 126.
In one embodiment, the file sharing server 126 is further configured to assist the plurality of edge working nodes 130 in implementing a unified data plane, so that each service module of different edge working nodes may call the image class data uploaded by other edge working nodes from the file sharing server 126.
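A sketch of how an upload step might route results by type is shown below, assuming the streaming media server 125 and the file sharing server 126 expose HTTP upload endpoints. The URLs, the file-extension test, and the form field name are hypothetical.

    # Sketch of routing video analysis results: video class data to the streaming
    # media server, image class data to the file sharing server.
    import requests

    STREAMING_SERVER_URL = "http://compute-center.example/streaming/upload"   # hypothetical
    FILE_SHARING_SERVER_URL = "http://compute-center.example/files/upload"    # hypothetical

    def upload_result(path: str) -> None:
        is_video = path.endswith((".mp4", ".ts", ".flv"))
        url = STREAMING_SERVER_URL if is_video else FILE_SHARING_SERVER_URL
        with open(path, "rb") as f:
            requests.post(url, files={"file": f}, timeout=30)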
FIG. 5 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure. As shown in FIG. 5, the system may include: a client 110, a computing center 120, and a plurality of edge working nodes 130.
The computing center 120 is configured with a backup network video recorder 127 in addition to the event library 121, the resource scheduler 122, and the container runtime environment 123. The backup network video recorder 127 is used to store backup history video. The embodiments of the present disclosure are not limited with respect to which videos the backup history video includes. For example, the backup history video includes a backup of video data and a backup of video class data in the video analysis result.
FIG. 6 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure. As shown in FIG. 6, the system may include: a client 110, a computing center 120, and a plurality of edge working nodes 130.
Each of the plurality of edge working nodes 130 is further configured with a message routing module 136; the message routing module 136 is used to send messages related to the video analysis task to other edge working nodes or to the computing center 120. Embodiments of the present disclosure do not limit which messages related to the video analysis task are specifically included. For example, the messages related to the video analysis task include: a service start message, a service stop message, a container running state message, and a message carrying an analysis result.
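A sketch of one possible message format and transport for the message routing module 136 follows, assuming plain HTTP with JSON bodies; the message schema and the endpoint are assumptions, not part of the disclosure.

    # Sketch of forwarding a task-related message to another edge working node or
    # to the computing center over HTTP/JSON.
    import time
    import requests

    def send_task_message(target_url: str, msg_type: str, payload: dict) -> None:
        # msg_type is one of the message kinds named above, e.g. "service_start",
        # "service_stop", "container_state", or "analysis_result".
        message = {"type": msg_type, "timestamp": time.time(), "payload": payload}
        requests.post(target_url, json=message, timeout=10)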
FIG. 7 shows a schematic diagram of a video analysis system in one embodiment of the present disclosure. As shown in FIG. 7, the system may include: a client 110, a computing center 120, a plurality of edge working nodes 130, and a plurality of network cameras 140.
Each of the plurality of network cameras 140 may acquire video data and upload the acquired video data to the task execution node. For example, in the case that the task execution node is an edge work node, the network camera may send the acquired video data to the edge work node, and the edge work node receives the video data uploaded by the network camera through the video docking module. For another example, in the case where the task execution node is the computation center 120, the network camera may transmit the acquired video data to the computation center 120, and the computation center 120 completes the acquisition of the video data after receiving the video data.
In order to facilitate understanding of the technical solutions provided by the embodiments of the present disclosure, the technical solutions provided by the embodiments of the present disclosure will be described below with reference to the video analysis systems corresponding to fig. 1 to 7 and the video analysis system shown in fig. 8.
As shown in fig. 8, the video analysis system may include: a client 110, a computing center 120, a plurality of edge working nodes 130, and a plurality of network cameras 140.
The computing center 120 is configured with an event library 121, a resource scheduler 122, a container runtime environment 123, a node state memory 124, a streaming media server 125, a file sharing server 126, and a backup network video recorder 127.
The plurality of edge work nodes 130 are configured with a container runtime environment 131, a video docking module 132, a task execution module 133, a memory 134, a resource probe pointer 135, and a message routing module 136.
After the client 110 sends the video analysis task to the computing center 120, the resource scheduler 122 configured by the computing center 120 obtains the available operation resource information of the plurality of edge nodes 130 from the node state memory 124, and determines a task execution node according to the available operation resource information. The available operating resource information of the plurality of edge nodes 130 is monitored by the respective configured resource probe pointers 135 and reported to the computing center 120 through the message routing module 136.
In the case where the task execution node is an edge working node, the resource scheduler 122 configured in the computing center 120 transmits the video analysis task to that edge working node. After the edge working node receives the video analysis task, the video docking module 132 obtains video data corresponding to the video analysis task, and the task execution module 133 obtains a target model image corresponding to the video analysis task from the memory 134 or the event library 121, creates a container in the container running environment 131 according to the target model image, and runs the container in the container running environment 131 to process the video data, so as to obtain a video analysis result. The video data may be provided by the plurality of network cameras 140.
The task execution module 133 then feeds back the video analysis result to the computing center 120 via the message routing module 136. The computing center 120 stores the video class data in the video analysis result in the streaming media server 125, the image class data in the video analysis result in the file sharing server 126, and a backup of the image class data in the video analysis result in the backup network video recorder 127.
In the case where the task execution node is the computation center 120, the computation center 120 acquires video data provided by the plurality of network cameras 140; a target model mirror image corresponding to the video analysis task is obtained from the event library 121, a container is created in the container running environment 123 according to the target model mirror image, and the container is run in the container running environment 123 to process video data, so that a video analysis result is obtained. Thereafter, the computing center 120 stores the video class data in the video analysis result in the streaming server 125, the image class data in the video analysis result in the file sharing server 126, and the backup of the image class data and the backup of the video data in the video analysis result in the backup network video recorder 127.
After obtaining the video analysis result, the computing center 120 feeds back the video analysis result to the client 110.
Communication connections between the client 110, the computing center 120, the plurality of edge working nodes 130, and the plurality of network cameras 140 are made via a network, which may be a wired network or a wireless network.
Alternatively, the wireless network or wired network described above uses standard communication techniques and/or protocols. The network is typically the Internet, but may be any network including, but not limited to, a local area network (Local Area Network, LAN), metropolitan area network (Metropolitan Area Network, MAN), wide area network (Wide Area Network, WAN), mobile, wired or wireless network, private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including HyperText Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), Internet Protocol Security (IPsec), etc. In other embodiments, custom and/or dedicated data communication techniques may also be used in place of or in addition to the data communication techniques described above.
The client 110 may be a variety of electronic devices including, but not limited to, a smart phone, a tablet computer, a laptop portable computer, a desktop computer, a wearable device, an augmented reality device, a virtual reality device, and the like.
Optionally, the application clients installed on different clients 110 are the same, or are clients of the same type of application for different operating systems. The specific form of the application client may also differ across terminal platforms; for example, the application client may be a mobile phone client, a PC client, etc.
The computing center 120 and the plurality of edge working nodes 130 may be servers that provide various services, where the servers may be independent physical servers, may be a server cluster or a distributed system formed by a plurality of physical servers, and may also be cloud servers that provide cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network, content delivery networks), and basic cloud computing services such as big data and artificial intelligence platforms.
The present exemplary embodiment will be described in detail below with reference to the accompanying drawings and examples.
Embodiments of the present disclosure provide a video analysis method that may be performed by any electronic device having computing processing capabilities. For example, the electronic device is an edge working node.
Fig. 9 shows a flowchart of a video analysis method in one embodiment of the present disclosure, and as shown in fig. 9, the video analysis method provided in the embodiment of the present disclosure includes the following S901 to S904.
S901, the edge working node receives a video analysis task sent by a computing center.
S902, the edge working node acquires a target model mirror image corresponding to the video analysis task.
In one embodiment, the edge working node obtains a target model image corresponding to the video analysis task, which may include: under the condition of acquiring a target model image for the first time, the edge working node acquires the target model image from an event library of a computing center and stores the target model image into a memory; in the case of acquiring the target model image for the nth time, the edge working node acquires the target model image from the memory, and n is an integer greater than 1.
S903, the edge working node acquires video data corresponding to the video analysis task.
The video data may be acquired in real time by a network camera and then provided to the edge working node; alternatively, the video data may be pre-stored in a network device, from which the edge working node obtains the video data over the network.
S904, the edge working node creates a container according to the target model mirror image, and operates the container to process the video data to obtain a video analysis result.
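Tying S901-S904 together on an edge working node might look like the sketch below, which reuses the hypothetical helpers sketched earlier in this description (get_target_model_image and run_analysis_container); the fields of the task dictionary are assumptions for illustration.

    # Sketch combining S901-S904 on an edge working node.
    import docker

    def handle_video_analysis_task(task: dict) -> str:
        client = docker.from_env()
        # S902: obtain the target model image (from the event library on first use,
        # from local storage afterwards); returns a docker Image object.
        image = get_target_model_image(client, task["task_type"])
        # S903: the video docking module supplies the video source (a camera URL or a stored file).
        video_source = task["video_source"]
        # S904: create and run a container from the target model image to process the video data.
        return run_analysis_container(image.tags[0], video_source, task["result_dir"])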
Configuring the event library in the computing center and having the edge working nodes acquire the needed model mirror images from it avoids the problem that the edge working nodes must be adaptively deployed one by one due to the differences among video analysis tasks and the heterogeneity of the edge working nodes, and reduces the cost of deploying the video analysis system and of subsequent system maintenance.
Fig. 10 shows a flowchart of a video analysis method in one embodiment of the present disclosure, and as shown in fig. 10, the video analysis method provided in the embodiment of the present disclosure includes the following S1001 to S1004.
S1001, the computing center receives a video analysis task sent by the client.
S1002, the computing center acquires available operation resource information of a plurality of edge working nodes.
S1003, the computing center determines a task execution node from a plurality of edge working nodes and the computing center according to the video analysis task and the available operation resource information.
In one embodiment, the computing center determining a task execution node from the plurality of edge working nodes and the computing center according to the video analysis task and the available operation resource information may include: determining that the computing center is the task execution node in the case that, according to the available operation resource information of the plurality of edge working nodes, the plurality of edge working nodes do not satisfy the condition for executing the video analysis task.
Taking the computing center as a standby task execution node improves the stability of the video analysis system and avoids situations in which the video analysis system cannot process video analysis tasks because the edge working nodes have insufficient available operation resources, the network connection is poor, or the video memory of the edge working nodes overflows.
S1004, the computing center sends the video analysis task to the task execution node so that the task execution node can execute the video analysis task.
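On the computing center side, S1002-S1004 together with the fallback described above might be combined as in the sketch below, reusing the hypothetical pick_task_execution_node helper sketched earlier; execute_locally and the dispatch endpoint are assumptions for illustration.

    # Sketch of S1002-S1004: choose a task execution node and dispatch the task.
    import requests

    def schedule_video_analysis_task(task: dict, node_states: list[dict]) -> None:
        # S1002/S1003: choose a task execution node from the edge working nodes,
        # falling back to the computing center itself when no edge node qualifies.
        node = pick_task_execution_node(node_states, task["requirements"])
        if node is None:
            execute_locally(task)   # hypothetical local execution in the computing center
            return
        # S1004: send the video analysis task to the chosen edge working node.
        requests.post(f"http://{node['address']}/tasks", json=task, timeout=10)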
Based on the same inventive concept, two video analysis apparatuses are also provided in the embodiments of the present disclosure, as described in the following embodiments. Since the principle of solving the problem of the embodiment of the device is similar to that of the embodiment of the method, the implementation of the embodiment of the device can be referred to the implementation of the embodiment of the method, and the repetition is omitted.
Fig. 11 shows a schematic diagram of a video analysis apparatus in one embodiment of the present disclosure. As shown in fig. 11, the apparatus is applied to an edge working node and includes: a first receiving module 1101, configured to receive a video analysis task sent by a computing center; a first obtaining module 1102, configured to obtain a target model image corresponding to the video analysis task, the first obtaining module 1102 being further configured to obtain video data corresponding to the video analysis task; and a processing module 1103, configured to create a container according to the target model image and run the container to process the video data, so as to obtain a video analysis result.
Fig. 12 shows a schematic diagram of a video analysis apparatus in one embodiment of the present disclosure, as shown in fig. 12, the apparatus being applied to a computing center, including: a second receiving module 1201, configured to receive a video analysis task sent by a client; a second obtaining module 1202, configured to obtain available operation resource information of a plurality of edge working nodes; the determining module 1203 is configured to determine, according to the video analysis task and the available operation resource information, a task execution node from a plurality of edge working nodes and a computing center; and the sending module 1204 is used for sending the video analysis task to the task execution node so as to facilitate the task execution node to execute the video analysis task.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.) or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 1300 according to such an embodiment of the present disclosure is described below with reference to fig. 13. The electronic device 1300 shown in fig. 13 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 13, the electronic device 1300 is embodied in the form of a general purpose computing device. The components of the electronic device 1300 may include, but are not limited to: the at least one processing unit 1310, the at least one memory unit 1320, and a bus 1330 connecting the different system components (including the memory unit 1320 and the processing unit 1310).
Wherein the storage unit stores program code that is executable by the processing unit 1310 such that the processing unit 1310 performs steps according to various exemplary embodiments of the present disclosure described in the section "detailed description of the invention" above.
The storage unit 1320 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
The storage unit 1320 may also include a program/utility 1324 having a set (at least one) of program modules 1325, such program modules 1325 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1330 may be a local bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or using any of a variety of bus architectures.
The electronic device 1300 may also communicate with one or more external devices 1340 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1300, and/or any device (e.g., router, modem, etc.) that enables the electronic device 1300 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1350. Also, the electronic device 1300 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, for example, the Internet, through a network adapter 1360. As shown in fig. 13, the network adapter 1360 communicates with other modules of the electronic device 1300 over the bus 1330. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1300, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided, which may be a readable signal medium or a readable storage medium, and on which a program product capable of implementing the above-described method of the present disclosure is stored. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code; when the program product is run on a terminal device, the program code causes the terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the section "detailed description" above.
More specific examples of the computer readable storage medium in the present disclosure may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In this disclosure, a readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Alternatively, the program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In particular implementations, the program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
In an exemplary embodiment of the present disclosure, there is also provided a computer program product including a computer program or computer instructions that are loaded and executed by a processor to cause the computer to carry out the steps according to the various exemplary embodiments of the present disclosure described in the section "detailed description" above.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
From the description of the above embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a mobile terminal, a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope of the disclosure being indicated by the following claims.

Claims (15)

1. A video analysis system, comprising:
the client is used for initiating a video analysis task to the computing center;
the computing center is configured with a container running environment, an event library and a resource scheduler, wherein the event library stores model images corresponding to various video analysis tasks, and the resource scheduler is used for determining task execution nodes from a plurality of edge working nodes and the computing center and sending the video analysis tasks to the task execution nodes;
the plurality of edge working nodes are configured with a container running environment, a video docking module and a task execution module, wherein the video docking module is used for acquiring video data of the video analysis task, and the task execution module is used for acquiring a target model image corresponding to the video analysis task from the model images of the event library, and for creating and running a container in the container running environment according to the target model image to obtain a video analysis result.
2. The system of claim 1, wherein the task execution module is further configured to stop running the container once the video analysis result is obtained.
3. The system of claim 1, wherein each edge working node is further configured with a memory;
the task execution module is used for acquiring the target model image from the event library and storing the target model image in the memory when the target model image is acquired for the first time; and acquiring the target model image from the memory when the target model image is acquired for the nth time, wherein n is an integer greater than 1.
4. The system of claim 1, wherein each edge working node is further configured with a resource probe; the resource probe is used for monitoring and reporting the available operation resource information of the edge working node to the computing center;
the computing center is also provided with a node state memory; the node state memory is used for storing and updating the available operation resource information of the plurality of edge working nodes;
the resource scheduler is configured to determine a task execution node from the plurality of edge working nodes and the computing center according to available operation resource information of the plurality of edge working nodes.
5. The system of claim 4, wherein the resource scheduler is configured to determine the computing center as the task execution node if, based on the available operation resource information of the plurality of edge working nodes, the plurality of edge working nodes do not satisfy the condition for executing the video analysis task.
6. The system of claim 4 or 5, wherein the available operation resource information includes central processing unit (CPU) occupancy, memory occupancy information, graphics processing unit (GPU) utilization, and video memory occupancy information.
7. The system of claim 1, wherein the task execution module is further configured to upload the video analysis results to the computing center;
the computing center is also provided with a streaming media server and a file sharing server; the streaming media server is used for storing video class data in the video analysis result; the file sharing server is used for storing image class data in the video analysis result.
8. The system of claim 1, wherein the computing center is further configured with a backup network video recorder; the backup network video recorder is used for storing backup historical videos.
9. The system of claim 1, wherein each edge working node is further configured with a message routing module; the message routing module is used for sending messages related to the video analysis task to other edge working nodes or the computing center.
10. The system of claim 1, wherein the system further comprises:
and the network cameras are used for acquiring and sending the video data to the task execution node.
11. A video analysis method, applied to an edge working node, comprising:
receiving a video analysis task sent by a computing center;
obtaining a target model image corresponding to the video analysis task;
acquiring video data corresponding to the video analysis task;
and creating a container according to the target model image, and running the container to process the video data to obtain a video analysis result.
12. A video analysis method, applied to a computing center, comprising:
receiving a video analysis task sent by a client;
acquiring available operation resource information of a plurality of edge working nodes;
determining task execution nodes from the plurality of edge working nodes and the computing center according to the video analysis task and the available operation resource information;
and sending the video analysis task to the task execution node, so that the task execution node executes the video analysis task.
13. A video analysis apparatus, applied to an edge working node, comprising:
the first receiving module is used for receiving the video analysis task sent by the computing center;
the first acquisition module is used for acquiring a target model image corresponding to the video analysis task;
the first acquisition module is further used for acquiring video data corresponding to the video analysis task;
and the processing module is used for creating a container according to the target model image, and running the container to process the video data to obtain a video analysis result.
14. A video analysis apparatus, applied to a computing center, comprising:
the second receiving module is used for receiving the video analysis task sent by the client;
the second acquisition module is used for acquiring the available operation resource information of the plurality of edge working nodes;
the determining module is used for determining task execution nodes from the plurality of edge working nodes and the computing center according to the video analysis task and the available operation resource information;
and the sending module is used for sending the video analysis task to the task execution node, so that the task execution node executes the video analysis task.
15. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video analysis method of claim 11 or 12.
CN202310484321.2A 2023-04-28 2023-04-28 Video analysis system, method, apparatus, and computer-readable storage medium Pending CN116389492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310484321.2A CN116389492A (en) 2023-04-28 2023-04-28 Video analysis system, method, apparatus, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310484321.2A CN116389492A (en) 2023-04-28 2023-04-28 Video analysis system, method, apparatus, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN116389492A true CN116389492A (en) 2023-07-04

Family

ID=86965738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310484321.2A Pending CN116389492A (en) 2023-04-28 2023-04-28 Video analysis system, method, apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN116389492A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117750040A (en) * 2024-02-20 2024-03-22 浙江宇视科技有限公司 Video service balancing method, device, equipment and medium of intelligent server cluster
CN117750040B (en) * 2024-02-20 2024-06-07 浙江宇视科技有限公司 Video service balancing method, device, equipment and medium of intelligent server cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination