US20100225658A1 - Method And Apparatus For Digitizing And Compressing Remote Video Signals - Google Patents
- Publication number
- US20100225658A1
- Authority
- US
- United States
- Prior art keywords
- video
- computer
- block
- frame
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
- H04N21/4143—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a Personal Computer [PC]
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/507—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction using conditional replenishment
Definitions
- the present invention is directed generally to the field of the compression and digitization of analog video. More particularly, this invention relates to a method of effectively digitizing and compressing the video output of a computer such that it may be monitored and controlled from a remote location.
- telnet server or “daemon” is installed and started on the UNIX server.
- the daemon continually runs on the machine searching for and responding to new requests.
- a user wishing to access information on that machine starts a telnet client program which allows the user to issue a request to the daemon.
- After verification of the user's identity, the user has access to all or a portion of the information on the accessed remote computer.
- the method is useful, but in many instances it has significant limitations and drawbacks.
- telnet access is dependent upon the server not crashing and continually running the telnet daemon. If the server fails, crashes, or stops this daemon, a system administrator must physically restart the remote computer or the daemon on-site. Thus, this scheme is reliant on both a robust server and a robust daemon. Furthermore, the telnet programs are normally limited to text.
- the server software does not allow the system administrator full access to the remote computer at all times. For example, while the computer is rebooting and starting the operating system, the daemon program is not running. Therefore, the system administrator does not have access to the server during these periods. This is a major pitfall especially if the system administrator wishes to view or edit BIOS settings or view the server restart.
- keyboard, video, and mouse (“KVM”) switches have been developed that allow a single keyboard, video, and mouse to control multiple computers.
- the computers are often remotely located from the user or system administrator's computer (i.e., the local computer). These switches route the keyboard and mouse signals of the user computer to one of the remotely located computers chosen by the user. Similarly, the video output of the chosen computer is routed to the attached local monitor. Generally the user is able to switch to any of a series of remote computers.
- a KVM switch is useful for many reasons. For example, a user with many computers can save space and cost by eliminating the extra mice, keyboards, and monitors that would otherwise be needed for each remote computer.
- the cost and space saving technique is very practical in many environments, including server farms and web-hosting facilities where space is at a premium.
- Additional hardware solutions include intermediate routers and cables that increase the distance that may separate a user and a remote computer. These solutions can also increase the number of computers a user may control with one keyboard, monitor, and mouse. However, this network is separate from existing LANs and Internet connections and may be hampered by a distance limitation.
- the KVM switches have advantages over software solutions because they are not reliant upon the remote computer to function. If a system administrator needs to control and view a computer during “boot up” or to fix a problem with BIOS, the user can accomplish this via a remote keyboard, mouse and monitor linked via a KVM switch. Conversely, this would not be possible with a software solution.
- the KVM switch does not use processing power on the remote computer. From the point of view of both the controlled computer and the local computer, it is as if the video, mouse and keyboard are directly connected to the remote computer. Thus, no additional resources on the host computer are consumed.
- KVM switches that are operating system and machine independent. As long as the KVM ports are compatible with the keyboard, video and mouse connections, and with the output/input ports of the target computer, any KVM switch can be used, regardless of the operating system. With software solutions, a separate version of the software is generally needed if the user must control a variety of computers with a variety of operating systems.
- KVM switches greatly improve the control of remote units
- KVM switches rely on direct connections for sending signals from the host computer to the keyboard, video, and mouse that degrade over distances. For example, after a certain distance, the signal degradation affects the quality of the video signal transmitted. Therefore, if a system administrator or user needs access to a computer, the user still has to be within a certain distance of the computer.
- a KVM switch whereby the keyboard, video, and mouse signals are sent over standard Internet protocols or telephone connections may be utilized. This allows any Internet or modem enabled device with a keyboard, video and mouse to control a remote computer regardless of the physical distance between a user computer and a remote device.
- Video compression takes advantage of the redundancies in video signals, both between successive frames of video, and within each individual frame.
- the transmission of a video signal from a computer monitor output generally has large amounts of both spatial and interframe redundancies. For example, in a near idle computer, the only change between successive frames of video might be the blinking of a cursor. Even as a user types a document, a majority of the screen does not change over periods of time.
- L-TV Low-Reliable Digital Video Coding
- the L-TV product is designed such that it can read directly from the video memory of the Macintosh computer.
- some video compression techniques are used in the L-TV product such that only portions of the image that change between frames are retransmitted.
- L-TV only functions with a Macintosh computer.
- Other advances in the art include the development of software-based simulation systems.
- Widergren U.S. Pat. No. 4,302,775 discloses a method for comparing sub blocks of an image between successive frames of video and only encoding the differences between the blocks for transmission.
- the block-by-block comparisons are completed in the transform domain.
- the system requires extra computations necessary to compute the transform of the image thereby increasing the time necessary to complete the video compression.
- the disclosure of Widergren requires faster or more complex hardware.
- the present invention improves upon these time consuming extra computations by completing the block comparisons in the spatial domain.
- the present invention utilizes a two-level thresholding method to ensure that the block comparisons are effective.
- the present invention improves on this disclosure by using two separate methods of storing previous frames and comparing the current frame of video with the previous frame. Furthermore, the present invention improves upon the efficiency of the cache comparisons by comparing the cyclic redundancy check for each block being compared.
- Carr et al. U.S. Pat. No. 5,008,747 discloses a method for block-by-block comparison of sequential frames of video. Only changed pixels are retransmitted between frames of video. Carr et al. teaches a method whereby the changed pixels are stored in a matrix which is vector-quantized to one of a standard set of matrices. Thus Carr et al. discloses a video compression technique that uses temporal redundancies to reduce the data that must be transmitted. However, Carr et al. fails to disclose a method and apparatus capable of providing a reduced-time transmission of video. Further, Carr et al. fails to disclose a method of quantizing pixels before comparing frames. Thus the disclosures of Carr et al. would not be suited for remotely controlling a computer because it fails to teach methods that take into account noise that may be introduced into the video through digitization errors.
- Astle U.S. Pat. No. 5,552,832 discloses a camera that receives analog video signals and converts said signals to digital signals by implementing a microprocessor that divides the signals into blocks. The blocks of video are then classified and run-length encoded.
- Astle discloses a video compression method that operates on blocks of pixels within an image.
- the present invention improves upon the compression techniques disclosed by taking advantage of temporal redundancies between images. Further, the present invention increases redundancy through noise elimination and a color lookup table.
- Perholtz et al. U.S. Pat. No. 5,732,212 discloses a method for digitizing video signals for manipulation and transmission.
- the patent discloses a method whereby video raster signals from the data processing device are analyzed to determine the information displayed on a video display monitor attached to the data processing device.
- Perholtz et al. teaches a method for digitizing and compressing video. However, the method compresses video by analyzing the content of the video and sending said content. Thus in general, Perholtz does not teach a method in which the full graphical interface is displayed to the user.
- the present invention improves upon the disclosure of Perholtz by providing an improved graphical interface to the user. Further, the present invention improves upon this disclosure by compressing the video based upon spatial and temporal redundancies.
- Frederick U.S. Pat. No. 5,757,424 discloses a system for high-resolution video conferencing without extreme demands on bandwidth.
- the system disclosed creates a mosaic image by sampling portions of a scene and combining those samples. This system allows for the transmission of video over low bandwidths.
- Frederick's system in general is used within a camera for transmitting video. Though Frederick teaches a way to reduce the data necessary for transmission, Frederick does not teach methods for comparing frames of video. In addition, Frederick does not teach a system whereby the video that must be sent is compressed using lossless compression.
- the present invention overcomes the limitations of Frederick's disclosures by using lossless compression in the spatial domain and two temporal redundancy checks. Further, the present invention teaches video compression in the context of controlling a remote computer rather than in the context of transmitting video from a camera.
- Schneider U.S. Pat. No. 6,304,895 discloses a system for intelligently controlling a remotely located computer.
- Schneider further discloses a method of interframe block comparison where pixel values that even slightly change are retransmitted. This necessarily leads to retransmission of noisy pixels unnecessarily.
- Schneider will retransmit an entire block if a threshold percentage of pixels within the block have changed.
- the present disclosure overcomes these shortcomings by recognizing minor changes due to noise, by implementing a more efficient calculation method and with a cache capable of storing previous blocks. Furthermore, the present disclosure recognizes significant changes (i.e. a pixel changing from black to white due to a cursor). In addition, slight color variations will be smoothed due to the color code and noise reduction methods of the present invention.
- Pinkston U.S. Pat. No. 6,378,009 teaches a method of sending control, status and security functions over a network such as the Internet from one computer to another.
- Pinkston discloses a switching system that packetizes remote signals for the Internet; no video compression methods or conversions are disclosed. Instead, Pinkston teaches a method whereby a system administrator can access a KVM switch remotely over the Internet and control the switch. Therefore, in and of itself, Pinkston's disclosures would not allow a remote computer to be operated over a low-bandwidth connection.
- the digitization of a video signal and its subsequent compression allows a computer to be controlled remotely using standard Internet protocols.
- the compression allows an interface to utilize digital encryption techniques known in the art.
- Non-digital KVM switches, in transmitting analog signals, do not allow or interface well with digital encryption schemes, such as 128-bit encryption. If a computer with sensitive information needs to be controlled from a remote location, there needs to be protection from potential hackers or competitors.
- what is needed is a KVM switch that allows for near real-time transmission of compressed video.
- the compression must be efficient enough to transmit video in near real-time over modem bandwidths.
- the compression must not be too lossy, because the resulting image must be discernible.
- the KVM switch should work across multiple platforms (e.g. Macintosh, IBM compatible, and UNIX). Therefore, the switch cannot take advantage of platform dependent GUI calls, or similar system dependent codes which indicate when and where updates in the video are needed.
- the present disclosure provides an improved video compression algorithm that offers efficient bandwidth usage and accurate video transmission.
- the present invention is directed to keyboard, video, and mouse control systems.
- the disclosure relates to a method and device for the digitization and compression of video signals such that the signal is transmitted via a modem, Internet connection, LAN/WAN, etc.
- the present invention includes a corresponding decompression technique that allows video signals to be displayed on a monitor. More particularly, in the preferred embodiment, this compression technique allows for the viewing of a remote computer's video output on a local video output device such as a monitor.
- the invention can be interfaced with a KVM switch so that multiple remote computers can be controlled and monitored.
- the keyboard and mouse signals are transmitted over standard modem and Internet connections synchronized with the video transmission.
- the video signal is transmitted from a remote computer to a local computer whereas the keyboard and mouse signals are transmitted from the local computer to the remote computer.
- the present invention allows for platform independent communication between computers.
- the local computer can control one or more remote computers utilizing a variety of computer platforms, including, but not limited to Windows, Mac, Sun, DEC, Alpha, SGI, IBM 360, regardless of the operating system of the local computer.
- the present invention may be used to control a remote serial terminal device, such as a printer, fax machine, etc.
- a serial terminal device can be connected directly to the present invention or through a serial concentrator and can be controlled from the local application.
- the serial concentrator is linked with the keyboard, video, and mouse.
- the device uses compression techniques that have been designed to improve video transfer times for video having characteristics exhibited by computer monitor output.
- the compression can be accomplished using readily available hardware, providing a viable device that allows a remote computer to be controlled via a local computer equipped with a keyboard, monitor, and mouse, so long as the remote device and the local computer can communicate via the Internet, a direct modem connection, a LAN/WAN, etc.
- the video compression does not use operating system specific hooks, nor does the compression employ platform specific GDI calls. Instead, the algorithms take advantage of spatial and temporal redundancies in the video.
- analog video is sent to an A/D converter.
- the digitization of the analog video is necessary in order for the video to be transmitted using an Internet protocol.
- a detrimental side effect of the digitization process is the introduction of quantization errors and noise into the video.
- the next step in the present invention is to eliminate the A/D conversion noise via histogram analysis.
- This noise elimination is done by first dividing a frame of video into logical two-dimensional blocks of pixels. Many different sizes of blocks may be used, for example 8×8 pixels, 32×32 pixels, 64×32 pixels, etc. Different block sizes may be used depending on the size of the entire image, the bandwidth of the connection, etc. After the image is divided into blocks, the noise reduction algorithm is completed on each block separately.
- a histogram of pixel values is created and sorted by frequency so that it is possible to identify how often each pixel value occurs. Less frequent pixel values are compared to more frequently occurring pixel values. If the less frequently occurring pixels are close in pixel value to the more frequently occurring pixel values, color values are mapped to the closest high frequency pixel value. To determine how close pixel values are, a distance metric is used based on the red, green, and blue (“RGB”) components of each pixel. In alternative embodiments, a similar distance metric can be used, based on the appropriate components of the pixel for that embodiment.
- the purpose of the noise reduction algorithm is to increase the redundancy in an image by eliminating the superfluous noise introduced by the A/D converter. For example, suppose an 8×8 pixel block size is used and the algorithm is operating on this particular block. Further, assume that of the 64 pixels in the current block, 59 are blue, 4 are red, and 1 is a light blue. In this example, a low frequency threshold is defined as any pixel values that occur less than 5 times and a high frequency threshold is defined as any pixel value that occurs more than 25 times within a block. In general, pixel values between these thresholds are ignored for the noise reduction analysis. Therefore, the algorithm determines that the 4 red pixels and the 1 light blue pixel occur rarely, and therefore might be noisy.
- the 4 red pixels and the 1 light-blue pixel are compared with the more frequent pixel values (i.e. in this case the blue value).
- a pre-determined distance-threshold is used. If the distance between the less frequent pixel and the more frequent pixel is within this distance-threshold, then the less frequent pixel value is converted to the more frequent pixel value.
- the light-blue pixel is close enough in value to the blue pixel.
- the light-blue pixel is mapped to the blue pixel.
- although the red pixels occur rarely, the distance in value between the red pixel value and the blue pixel value is large enough so that the red pixels are not converted.
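- The histogram-based filtering described above can be expressed as a short sketch. The following Python code is illustrative only, not the patented implementation; the (r, g, b) block format and the low-frequency, high-frequency, and distance thresholds are assumed values drawn from the example above.

```python
from collections import Counter

def filter_block(block, low_thresh=5, high_thresh=25, dist_thresh=100):
    """Map rarely occurring pixel values in one block to nearby frequent values.

    block: list of (r, g, b) tuples for one rectangular block of pixels.
    Pixels occurring fewer than low_thresh times are remap candidates;
    pixels occurring more than high_thresh times are the remap targets.
    """
    counts = Counter(block)
    rare = [p for p, n in counts.items() if n < low_thresh]
    popular = [p for p, n in counts.items() if n > high_thresh]

    def distance(a, b):  # simple squared RGB distance (one possible metric)
        return sum((x - y) ** 2 for x, y in zip(a, b))

    remap = {}
    for p in rare:
        if not popular:
            break
        nearest = min(popular, key=lambda q: distance(p, q))
        if distance(p, nearest) <= dist_thresh:
            remap[p] = nearest  # close enough: treat the value as A/D noise
        # otherwise the rare value is left untouched (e.g. the red pixels above)

    return [remap.get(p, p) for p in block]
```

- In the 8×8 example above, this sketch would fold the single light-blue pixel into blue while leaving the four red pixels unchanged.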
- one method of compressing color video is to use fewer bits to represent each pixel.
- a common video standard uses 8 bits to represent the red component of video, 8 bits to represent the green component of video, and 8 bits to represent the blue component of video. This representation is commonly referred to as an “RGB” representation. If only the four most significant bits from the red, green, and blue components of the video are used instead of all 8-bits, the total amount of data used to represent the video is reduced by 50 percent.
- the present invention uses a more intelligent method of converting an RGB representation of pixels into a compact representation.
- the method and apparatus of the present invention uses a color look-up table that maps a specific RGB value to a more compact form. Both the compression device and the decompression device use the same look-up table. Further, different look-up tables can be used depending on bandwidth availability, the capabilities of the local display device, etc.
- the color look-up table is used to implement the noise reduction color conversion.
- a map of RGB values to color code values is created. If a less frequently occurring pixel value needs to be adjusted to a similar more frequent color, this is accomplished through the use of the color lookup table. The less frequently occurring color is mapped to the same color code as the highly frequent occurring color.
- the noise is efficiently removed from each block, while at the same time, the number of bits used to represent each pixel is reduced.
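- Purely as an illustration of how the color look-up table and the noise filter can be combined, the sketch below packs each pixel into a hypothetical 12-bit code (4 bits per component); the patent itself allows different look-up tables depending on bandwidth and the capabilities of the local display device.

```python
def color_code(rgb):
    """Hypothetical compact code: keep the 4 most significant bits of each
    8-bit component and pack them into 12 bits (a 50% reduction from 24-bit RGB)."""
    r, g, b = rgb
    return ((r >> 4) << 8) | ((g >> 4) << 4) | (b >> 4)

def encode_block(filtered_block):
    """Convert a noise-filtered block (see the previous sketch) to color codes.
    Because rare colors were already remapped, each one receives the same
    code as the frequent color it was merged into."""
    return [color_code(p) for p in filtered_block]
```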
- In addition to the methods of noise reduction, improved methods of interframe block comparison are disclosed. Specifically, temporal redundancy is identified and reduced, thereby limiting the bandwidth necessary for transmission of the remote computer video output.
- the present invention uses a unique two-level thresholding method to determine if areas of the frame have changed.
- the present invention uses two frame buffers as input. The first is the newly captured frame buffer. The second is the compare frame buffer.
- the compare frame buffer contains the image data from previously captured frame buffers.
- the algorithm divides each of these frame buffers into blocks of pixels. Any block size may be used, including 8×8, 32×32, 64×32, etc., as well as other irregular block sizes. Different block sizes may be used depending on bandwidth requirements, image size, desired compression yields, etc.
- the algorithm processes one block of pixels at a time. For each pixel, the algorithm computes the difference between the color components of the current frame buffer pixel and the compare frame buffer pixel. From this, a distance value is computed. This process is done for each pixel in the block.
- Each of these distance values is compared with a “pixel threshold.” If the distance value exceeds the pixel threshold, the amount it exceeds the threshold by is added to a distance sum. This running sum is calculated based on various equations for all pixels in the block.
- the distance sum is then compared with a “cell threshold.” If the distance sum exceeds the cell threshold, then the block of pixels is considered changed in comparison to the previous block. If the block of pixels has changed, the compare frame buffer will be updated with this new block. Further, this new block will be sent in a compressed format to the local user.
- the block is considered to be unchanged. Neither the compare frame buffer, nor the local user's screen is updated.
- This algorithm is ideal for locating areas of change in that it can detect a large change in a few pixels or a small change in a large number of pixels.
- the method proves more efficient and accurate as compared to an algorithm that simply counts the number of changed pixels in a cell. With such an algorithm, if a very few pixels within the cell changed drastically (for example, from black to white), the algorithm would still consider the cell to be unchanged since the overall summation would not exceed a low threshold. This will often lead to display errors in the transmission of computer video.
- a percentage threshold algorithm would not register this change leading to a display error.
- a percentage threshold algorithm by only looking at the number of pixels within a block that have changed, generally fails at recognizing a case in which a few pixels change a lot.
- the present invention, by virtue of the two-level thresholding method and apparatus, recognizes that the block of pixels has significantly changed between frames of video.
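- A minimal sketch of the two-level thresholding follows; the per-pixel distance formula and the pixel and cell threshold values are assumptions chosen for illustration, since the description leaves those parameters open.

```python
def block_changed(current, compare, pixel_threshold=48, cell_threshold=200):
    """Two-level threshold test between corresponding blocks of two frames.

    current, compare: lists of (r, g, b) tuples for the same block position
    in the newly captured frame and in the compare frame buffer.
    Returns True when the block should be treated as changed.
    """
    distance_sum = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(current, compare):
        d = abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)  # per-pixel distance
        if d > pixel_threshold:
            distance_sum += d - pixel_threshold  # only the excess accumulates
    return distance_sum > cell_threshold
```

- With this structure, a few pixels changing drastically (black to white) or a modest change spread over many pixels both push the distance sum past the cell threshold, which a simple changed-pixel count would miss.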
- the second temporal compression method relies on a cache of previously transmitted frames. An identical cache is synchronized between the remote device and the user's local computer. Like the previous temporal redundancy check, this second check is performed on a block of pixels within a frame. Again, any block size may be used, for example, 8×8, 16×16, 32×32 or 64×32.
- the cache check begins whenever a cell changes.
- the cache check compares the current block with corresponding blocks from previous frames.
- the cache can store an arbitrarily large number of previous frames. A higher percentage of cache hits is more likely to occur with larger cache sizes.
- the memory and hardware requirements increase with an increase in cache size. Further, the number of comparisons, and thus the processing power requirements, also increase with a larger cache size.
- a “cache hit” is defined as locating a matching block in the cache.
- a “cache miss” is defined as not finding the current block in the cache.
- both the remote and local devices update the cache, by storing the block within the cache. Since the cache is of limited size, older data is overwritten.
- a simple algorithm can be employed to overwrite the oldest block within the cache. The oldest block can be defined as the least recently transmitted block.
- In order to search for a cache hit, the new block must be compared with all corresponding blocks located within the cache. There are several ways in which the new block can be compared with the previous blocks located within the cache. In the preferred embodiment, a cyclic redundancy check (“CRC”) is computed for the new block and all corresponding blocks.
- the CRC is similar to a hash code for the block.
- a hash code is a smaller, yet unique representation of a larger data source. Thus, if the CRCs are unique, the cache check process can compare CRCs for a match instead of comparing the whole block. If the CRC of the current block matches the CRC of any of the blocks in the cache a “cache hit” has been found.
- the CRC is a smaller representation of the block, less processing power is needed for comparing CRCs. Further, it is possible to construct a cache in which only the CRCs of blocks are stored on the remote side. Thus, using a CRC comparison saves memory and processor time.
- a similar hash code or checksum can be used.
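- The cache check might be sketched as follows; zlib.crc32 stands in for the CRC, the cache depth is arbitrary, and storing only CRCs (rather than the blocks themselves) on the sending side is one of the possibilities the description mentions.

```python
import zlib
from collections import OrderedDict

class BlockCache:
    """Per-position cache of CRCs of previously transmitted blocks, kept
    identical on the compression and decompression sides."""

    def __init__(self, depth=8):
        self.depth = depth
        self.slots = {}  # block position -> OrderedDict of CRC -> True

    def check(self, position, block_bytes):
        """Return (hit, crc). On a miss the CRC is stored, and the least
        recently transmitted entry is overwritten once the cache is full."""
        crc = zlib.crc32(block_bytes)
        entries = self.slots.setdefault(position, OrderedDict())
        if crc in entries:
            entries.move_to_end(crc)      # refresh recency on a cache hit
            return True, crc
        entries[crc] = True               # cache miss: remember this block
        if len(entries) > self.depth:
            entries.popitem(last=False)   # overwrite the oldest entry
        return False, crc
```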
- an algorithm similar to the one used in the first temporal redundancy check can be applied to the cache check.
- such an algorithm can be less susceptible to noise.
- a method and apparatus can retransmit a difference frame, whereby only the changes between the current frame and the previous frame are transmitted.
- these methods of transmitting difference frames can cause frequent synchronization errors.
- less addressing is potentially needed in determining where each block is located within an image than if the decision to retransmit is performed at pixel granularity.
- each block that must be transmitted is first compressed.
- the blocks are compressed using the Joint Bi-level Image Group (JBIG) lossless compression technique.
- JBIG is lossless and was designed for black and white images, such as those transmitted by facsimile machines.
- the present invention compresses and transmits color images. Therefore, in order to utilize the JBIG compression technique, the color image must be bit-sliced and the subsequent bit-planes must be compressed separately.
- a bit-slice of a color image is created by grabbing the same bit from each pixel across the whole image.
- the color look-up table uses a compact form in which the most significant bits of each pixel are stored first, and the lesser significant bits are stored last. Thus, the first bit planes will contain the most significant data and the last bit-planes will contain the least significant data.
- the method and apparatus compresses and transmits the most significant bits of the frame first.
- the local computer will receive video from the remote computer progressively, receiving and displaying the most significant bits of the image before receiving the remaining bits.
- Such a method is less sensitive to changes in bandwidth and will allow a user to see the frame of video as it is transmitted, rather than waiting for all details of the frame to be sent.
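- The bit-slicing and progressive transmission can be sketched as below. The 12-bit color codes come from the earlier color-code sketch, and zlib stands in for the JBIG bi-level coder named in the text, since a JBIG codec is not assumed to be available here.

```python
import zlib

def bit_planes(codes, bits=12):
    """Split a frame (or block) of color codes into bit planes, most
    significant plane first, so the most important data is sent first."""
    planes = []
    for bit in range(bits - 1, -1, -1):
        planes.append(bytes((c >> bit) & 1 for c in codes))
    return planes

def compress_progressively(codes, bits=12):
    """Compress and yield one bi-level plane at a time in significance order,
    letting the receiver display a coarse image before the rest arrives."""
    for plane in bit_planes(codes, bits):
        yield zlib.compress(plane)
```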
- the device is also capable of calibrating the analog to digital conversion automatically “on the fly” so that the whole range of digital values is used. For example, if the device is supposed to transmit values between 0 and 255 (i.e., general pixel depth values), but instead only transmits values between 10 and 245, it will dynamically adjust the gain of the A/D converter to take advantage of the full range of digital values. This adjustment can be done for the red, green and blue components on an individual basis or a cumulative basis. By adjusting this range, the user receives more accurate representations of the video.
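- One way the on-the-fly calibration could be reasoned about is sketched below: observe the range actually produced by the A/D converter and derive a gain and offset that stretch it to the full digital range. The formula is an assumption, and how the adjustment is written back to the converter is hardware specific.

```python
def suggest_calibration(samples, full_scale=255):
    """Given observed digitized samples for one channel (e.g. values that only
    span 10..245), return (gain, offset) that stretch them to 0..full_scale."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return 1.0, 0.0          # flat input: leave the converter settings alone
    gain = full_scale / (hi - lo)
    offset = -lo * gain
    return gain, offset

# Per the description this can be applied to the red, green, and blue channels
# individually or to all samples cumulatively.
```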
- the decompression device includes a device capable of bi-directional digital communications. Using this communication device, the decompression device is able to receive video data from the compression device and transmit keyboard and mouse data. In an alternate embodiment, the decompression device also includes a means to control a serial device by transmitting serial data. Thus, the decompression device enables a local user to control a remote computer using a local keyboard, video, mouse, and serial device.
- the decompression device reconstructs frames of video based on the messages received from the compression device.
- the decompression device contains a frame buffer with the most up-to-date video data.
- the data in the frame buffer is sent to a display device so that the user can view the data from the remote computer.
- the image in the frame buffer is constructed using a combination of data from a cache and transmitted data from the remote device.
- the remote device indicates what areas of the remote computer video yielded “cache hits” and what areas are retransmitted.
- the decompression device constructs the frame buffer based on these indications.
- the decompression device also contains a cache that remains synchronized with the cache on the compression device.
- whenever a new block is transmitted, the cache is updated. Both the compression device and the decompression device use the same method for updating the cache by overwriting older data.
- the compression device sends video data that has been compressed using a lossless compression algorithm such as JBIG. Therefore, further disclosed is a method and apparatus which reverses this lossless compression.
- This decompression method and apparatus recognizes the changed areas of the image based on flags transmitted by the compression device. From this information, the decompression technique reconstructs the full frame of video.
- the frame of video is converted to a format that may be displayed on the local video monitor by reversing the color-table conversion.
- the decompression method is able to send the raw frame of video to the operating system, memory, or other location such that it may be received and displayed by the monitor.
- the decompression device like the compression device stores a local copy of the color-code table.
- the device can then convert the data from the remote computer into a standard RGB format for display on the local monitor.
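- An illustrative sketch of the decompression side follows. The message format is hypothetical, the cache layout is an assumption, and the color-code reversal assumes the 12-bit table used in the compression sketches; a block message is assumed to arrive with its bit planes already decoded back into color codes.

```python
def reverse_color_code(code):
    """Invert the hypothetical 12-bit code back to a displayable RGB triple.
    The low bits discarded by the look-up table are not recoverable."""
    return (((code >> 8) & 0xF) << 4, ((code >> 4) & 0xF) << 4, (code & 0xF) << 4)

def apply_message(frame, cache, msg):
    """Update the local frame buffer from one message.

    frame: dict mapping block position -> list of color codes.
    cache: dict mapping (position, crc) -> list of color codes, kept in
           sync with the compression device.
    msg:   hypothetical ('hit', position, crc) or ('block', position, crc, codes).
    """
    if msg[0] == 'hit':
        _, position, crc = msg
        frame[position] = cache[(position, crc)]   # reuse the cached block
    else:
        _, position, crc, codes = msg
        frame[position] = codes
        cache[(position, crc)] = codes             # stay synchronized with sender
```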
- the decompression method can be implemented in a variety of ways. For example, in one embodiment, it is implemented as a software application that can be run in, for example, the Windows OS on an Intel Pentium powered PC. In an alternate embodiment, the decompression technique can be implemented such that it may run within a web browser such as Internet Explorer or Netscape® Navigator®. Such an embodiment would be more user friendly, therefore reducing the need for the installation of additional software on the local computer. Finally, in yet another embodiment, the decompression can be implemented in a device composed of a microprocessor and memory. Such an embodiment would further limit the necessary software stored on the local machine.
- Since the present invention is used for controlling a remote computer from great distances, there is a need to ensure that the transmission of the video signals is secure. If not, there exists the potential that hackers or competitors could view or control a user's computer. Therefore, the present invention was designed to easily integrate with digital encryption techniques known in the art.
- a 128-bit encryption technique is used both to verify the identity of the user and to encrypt and decrypt the video stream transmission.
- a 128-bit public key RSA encryption technique is used to verify the user, and 128-bit RC4 private key encryption is used for the video streams.
- this video compression apparatus and method is used to allow a local computer access to a remote computer.
- the compression method and device are not limited to such an embodiment, and can be applied to future needs for the transmission of similar types of video in near real-time over low bandwidths.
- FIG. 1A illustrates an overview of the preferred embodiment of the present invention, in which the video compression method and apparatus are utilized between a local computer and a remote computer controlled by that local computer, so long as both are connected via an agreed upon protocol.
- FIG. 1B illustrates an alternate embodiment, in which the compression device is combined with a KVM switch, such that a local user can control one of many remote computers.
- FIG. 2 depicts a block diagram of the preferred embodiment of the compression device, including the hardware used to interface with the remote computer and the communications device, and to perform the digitizing and compression of signals according to the present invention.
- FIG. 3A depicts a block diagram of one embodiment of the decompression device, whereby all decompression is done in software on a local computer.
- FIG. 3B depicts a block diagram of an alternate embodiment of the decompression device, in which the decompression apparatus is a separate hardware device.
- FIG. 4 illustrates a flowchart depicting an overview of the video compression algorithm.
- FIG. 5A depicts a more detailed flowchart of the compression algorithm, showing the nearest color match function and color-code table.
- FIG. 5B depicts a detailed flowchart of the compression algorithm, including how the cache testing and JBIG compression fit within the overall algorithm.
- FIG. 6 depicts a flowchart of the nearest match function integrated with the color code table.
- FIG. 7 depicts a flowchart of the Noise Filter & Difference Test.
- FIG. 8 depicts an overview flowchart of the decompression method including integration with an application on the local computer.
- FIG. 9 depicts a more detailed flowchart of the decompression algorithm.
- FIG. 10 illustrates an example of an alternate configuration of the present invention in which multiple inputs of four local computers in conjunction with KVM switches are utilized to control remote servers.
- FIG. 11 illustrates an alternate configuration of the present invention in which 8 local computers control 256 servers.
- FIG. 12 illustrates an alternate configuration wherein 8 local computers control 1024 remote servers.
- FIG. 13 illustrates an example of an alternate embodiment of the present invention wherein 16 local computers control 256 remote servers.
- FIG. 1A is a block diagram of the preferred embodiment of the present invention, including a computer system for accessing and controlling a remotely located computer system, the environment in which the present invention would typically be used.
- the term “local” will be used from the perspective of the user who wishes to access a computer at a remote location.
- the term “remote” refers to equipment located at a different place from the user and accessible via the present invention. Therefore, the phrase “remote computer” refers to a computer with a direct connection to the apparatus of the present invention.
- video out of remote computer 101 connects to compression device 103 via standard monitor connection 105 .
- keyboard input/output is connected via standard keyboard connection 107 and the mouse input/output is connected to compression device 103 via standard mouse connection 109 .
- a user accesses remote computer 101 via local computer 111 .
- Local computer 111 is connected to monitor 113 , keyboard 115 , and mouse 117 via monitor connection 119 , keyboard connection 121 , and mouse connection 123 .
- monitor 113 , keyboard 115 , and mouse 117 are wired separately.
- monitor connection 119 , keyboard connection 121 , and mouse connection 123 consist of separate standard cables known in the art.
- any method of connecting monitor 113 , keyboard 115 , and, mouse 117 to local computer 111 may be used with the present invention.
- an alternative method is one in which keyboard 115 and mouse 117 connect to local computer 111 via a shared USB connection.
- keyboard connection 121 and mouse connection 123 might be one physical cable.
- keyboard 115 and mouse 117 can connect to local computer via a wireless connection.
- Compression device 103 includes communication device 125 , and local computer 111 includes local communication device 126 , both of which are capable of bi-directional digital communication via communications path 127 .
- Communication device 125 and local communication device 126 may include modems, network cards, wireless network cards, or any similar device capable of providing bi-directional digital communication.
- communications path 127 may include a telephone, the Internet, a wireless connection, or any other similar device capable of providing bi-directional digital communication.
- Communication device 125 and local communication device 126 enable compression device 103 and local computer 111 to communicate via any standard agreed upon protocol. Examples of these protocols include, but are not limited to, Transmission Control Protocol/Internet Protocol (TCP/IP), and User Datagram Protocol (UDP).
- Compression device 103 receives and analyzes the video signals from remote computer 101 via standard monitor connection 105 .
- Compression device 103 analyzes and converts the video signal so that it may be packaged for transmission via a standard Internet protocol.
- Local computer 111 receives the transmissions from compression device 103 via the bi-directional communications provided by communication device 125 , local communication device 126 , and communications path 127 and translates the signal via a decompression technique corresponding to the compression techniques of the present invention.
- local computer 111 receives signals from keyboard 115 and mouse 117 via keyboard connection 121 , and mouse connection 123 . These signals are packaged on top of a standard Internet protocol, sent to local communication device 126 and transmitted to communication device 125 via communication path 127 . Compression device 103 receives these signals from communication device 125 and transmits them to remote computer 101 via standard keyboard connection 107 and standard mouse connection 109 .
- the present invention allows a user at local computer 111 to control remote computer 101 as if the user were physically located at remote computer 101 .
- FIG. 1B depicts an alternate embodiment of the present invention in which compression device 103 as depicted in FIG. 1A is combined with KVM switch 129 .
- local computer 111 is capable of controlling either of four remote computers 101 .
- KVM switch 129 can control any series of remote computers 101 in a similar manner.
- KVM switch 129 has four standard monitor connections 105, four standard keyboard connections 107, and four standard mouse connections 109.
- the local user can switch control between each of the four remote computers 101 .
- FIG. 2 depicts a hardware diagram of compression device 103 of the preferred embodiment of the present invention.
- FIG. 2 is one embodiment in which the compression and digitization of the present invention may be implemented.
- the first step in compressing the video is the conversion of the video from analog to digital, completed by A/D converter 201 .
- A/D converter 201 receives analog red signal 203 , analog green signal 205 , analog blue signal 207 , horizontal synch signal 209 , and vertical synch signal 211 .
- Clock 213 drives A/D converter 201 using means commonly employed in the art.
- The outputs of A/D converter 201 are shown as R-out 215, G-out 217, and B-out 219. In the preferred embodiment, these outputs are used to represent the red component, green component, and blue component of the digitized signal, respectively.
- A/D converter 201 outputs pixels (e.g. one pixel at a time) and the results are stored in pixel pusher 221 .
- Pixel pusher 221 communicates with microprocessor 223 via communication bus 225 .
- Pixel pusher 221 can also communicate with frame buffer 227 and JBIG Compression device 229 using communication bus 225 .
- Communication bus 225 is connected to network interface card 231 and dual universal asynchronous receiver transmitter (DUART) 233 .
- DUART 233 interfaces with keyboard port 235 and mouse port 237 .
- A/D converter 201, keyboard port 235, and mouse port 237 allow compression device 103 to interface with remote computer 101.
- network interface card 231 allows compression device 103 to interface with communication device 125 .
- Compression device 103 receives analog video signals, output keyboard and mouse signals, and communicates with local computer 111 via communication device 125 .
- Utilizing JBIG compression device 229, microprocessor 223, flash 239, and random access memory 241, compression device 103 pictured in FIG. 2 can be programmed and configured to implement the video processing methods of the present invention disclosed herein.
- FIG. 3A illustrates decompression software 301 interacting with local computer 111 .
- Local computer 111 runs operating system 303 capable of receiving data from local communication device 126 via operating system data link 305 .
- Operating system data link 305 utilizes shared memory, a memory bus, or other device drivers.
- Local communication device 126 receives data from compression device 103 over communications path 127 .
- operating system 303 loads the decompression software 301 like any other process, from a computer readable medium 307 via computer readable medium to operating system data link 309 .
- Decompression software 301 then accesses the data received from local communication device 126 .
- Decompression software 301 is used to decompress data received from local communication device 126 and convert it into data that can be interpreted by video card 311. The decompressed data is then passed to operating system 303, which transfers it to video card 311 via operating system data link 313.
- decompression software 301 receives signals from keyboard 115 via operating system's 303 operating system to keyboard connection 315 which connects to keyboard port 317 .
- decompression software 301 receives signals from mouse 117 , via operating system's 303 operating system to mouse connection 319 to mouse port 321 .
- FIG. 3B shows a decompression device 323 that can accomplish the same decompression as decompression software 301 .
- decompression device 323 replaces local computer 111 and further includes local communication device 126 .
- Monitor 113, keyboard 115, and mouse 117 attach to decompression device 323 through monitor connection 119, keyboard connection 121, and mouse connection 123, which plug into monitor port 325, keyboard port 327, and mouse port 329, respectively.
- the data from monitor port 325 , keyboard port 327 , and mouse port 329 communicates with memory 331 and microprocessor 333 to run the decompression methods of the present invention.
- the decompression method receives data from local communication device 126 and transmits a decompressed version of the data to monitor 113.
- These connections enable decompression device 323 to send data from keyboard port 327 and mouse port 329 to local communication device 126 .
- Local communication device 126 transmits the data over communications path 127.
- These connections also enable decompression device 323 to receive data from local communication device 126 and transmit the data to video port 325 .
- One skilled in the art will readily appreciate that there are any number of ways to implement such a configuration utilizing a combination of hardware and/or software.
- FIG. 4 depicts the function of the compression and digitization apparatus of the present invention.
- the compression method is implemented by compression device 103, which connects with communication device 125 and standard monitor connection 105 of remote computer 101.
- the compression process begins at capture image block 401 where data is captured from standard monitor connection 105 .
- Capture image block 401 is implemented in compression device 103 by pixel pusher 221.
- the video is converted from VGA analog video to a digital representation of the signal.
- Pixel pusher 221 enables capture image block 401 to grab the raw data and pass it to the frame buffers.
- Frame store block 402 is a method implemented by device frame buffers 227 .
- Frame store block 402 stores a whole frame of video in frame buffer 227 .
- the resulting digital representation of the image is divided into a plurality of pixel blocks.
- the compression process is performed on each pixel block until the entire image has been compressed.
- the block size may be arbitrarily large; however, in the preferred embodiment the image is divided into blocks that are 64 by 32 pixels.
- each block of pixels is filtered and translated from an RGB representation to a color code representation.
- the process of filter block 403 is implemented in compression device 103 by microprocessor 223 .
- the filtering is designed to reduce the number of different colors present in each block by converting less frequently occurring colors to more frequently occurring colors. Noise introduced by the A/D converter distorts the pixel values of some pixels.
- the filtering recognizes pixels that are slightly distorted and adjusts these pixels to the correct value. Such filtering creates an image with greater redundancy, thus yielding higher compression ratios.
- the filtering completed in filter block 403 operates on one block of pixels at a time.
- the size of the block can vary based on bandwidth requirements, the size of the image, etc.
- the filtering is implemented as part of the color code conversion process.
- the color code table is a commonly used compression method of representing colors using fewer bits than if kept in RGB format. By using fewer bits, less information must be transmitted with each frame, allowing video to be transmitted at lower bandwidths.
- a variety of color code tables may be used depending on the desired number of unique colors in the image, bandwidth restrictions, etc.
- the color code table uses the results of the noise filter to convert less frequently occurring pixel colors to more frequently occurring colors.
- the less frequently occurring pixel values are given the same color code representation as the more frequently occurring pixel values.
- the noise reduction and color code conversion is accomplished at the same time.
- Compression device 103 keeps a cache of recently transmitted images. Such a cache can be implemented and stored in ram 241 . After noise elimination and image conversion, the compression process compares the most recent block with the corresponding block of pixels in recently transmitted images. This check is executed by “cache hit” check 405 .
- the methods of “cache hit” check 405 are implemented in compression device 103 by microprocessor 223 . If the most recently transmitted block is the same as the block stored in the cache, there is no need to retransmit the image. Instead, as noted in cache hit message block 407 , a “cache hit” message is sent to the local computer, indicating that the most recently transmitted block is already stored in the cache. Cache hit message block 407 is also implemented in compression device 103 by microprocessor 223 .
- update check 409 checks to see if the current block of pixels is similar to the corresponding block in the image most recently transmitted. This can also be implemented before “cache hit” check 405 , or in parallel with “cache hit” check 405 .
- the main purpose of update check 409 is to check if the block has changed since the last frame. If the block has not changed, there is no need to send an updated block to the local computer. Otherwise, the block is prepared for compression in bit plane block 411 .
- this update check 409 uses a different technique than the cache check. With two ways of checking for redundancy, higher compression can result. Both the methods of update check 409 and the methods of bit plane block 411 are implemented in compression device 103 by microprocessor 223.
- the cache is updated, and the data is compressed before being sent to the TCP/IP stack.
- the image is compressed using the IBM JBIG compression algorithm. JBIG is designed to compress black and white images. However, the image to be compressed is in color. Therefore, bit planes of the image are extracted in bit plane block 411 and each bit plane is compressed separately by compression block 413 . Finally, the compressed image is sent to the local computer. JBIG compression device 229 implements send compressed message block 415 . Send compressed message block 415 sends the compressed video to server stack block 417 . Server stack block 417 , implemented on NIC 231 enables the compressed video to be sent to local communication device 126 using an Internet protocol (in this case TCP/IP).
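- Purely to show how the stages of FIG. 4 fit together for a single block, the sketch below composes the helper functions from the earlier sketches (filter_block, encode_block, block_changed, BlockCache, compress_progressively); the message tuples handed to net_send are hypothetical and are not the protocol actually used.

```python
def process_block(position, current, compare, cache, net_send):
    """One illustrative pass of the FIG. 4 flow for a single block.

    current, compare: lists of (r, g, b) pixels for this block in the newly
    captured frame and in the compare frame buffer.
    net_send: callable handing a message tuple to the TCP/IP stack.
    Returns the pixel data that should become the new compare block.
    """
    # 1. Noise filtering and color-code conversion (filter block 403).
    filtered = filter_block(current)
    codes = encode_block(filtered)

    # 2. First temporal check: has the block changed since the last frame?
    if not block_changed(filtered, compare):
        return compare                    # nothing to send or update

    # 3. Second temporal check: is the changed block already in the cache?
    block_bytes = bytes(b for c in codes for b in c.to_bytes(2, 'big'))
    hit, crc = cache.check(position, block_bytes)
    if hit:
        net_send(('hit', position, crc))  # cache hit message (block 407)
        return filtered

    # 4. Otherwise bit-slice, compress, and transmit (blocks 411-415).
    for index, plane in enumerate(compress_progressively(codes)):
        net_send(('plane', position, crc, index, plane))
    return filtered
```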
- FIG. 5A and FIG. 5B provide detailed flowcharts of a preferred embodiment of the compression process.
- the video capture is done at a rate of 20 frames per second in VGA capture block 501.
- VGA capture block 501 is implemented by pixel pusher 221, which receives the output of the A/D conversion process. Standard monitors often update at refresh rates as high as 70 times per second. As a rate of 20 frames per second is significantly less frequent, this step limits the amount of data that is captured from the computer. Thus, this first step reduces the bandwidth needed to transmit the video.
- the data is outputted in RGB format where 5 bits are allocated to each color. This allows for the representation of 32,768 unique colors. However, other formats capable of storing more or less colors may be used depending on the needs of the users and the total available bandwidth.
- VGA capture block 501 After receiving the digitized signal, VGA capture block 501 transmits the raw data to frame buffer 0 503 and frame buffer 1 505 .
- a frame buffer is an area of memory capable of storing one frame of video. Two frame buffers allow faster caching of image data. Raw frames of video are alternatively stored in frame buffer 0 503 , and frame buffer 1 505 . This allows the next frame of video to be captured even as compression is being performed on the previous frame of video.
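- A minimal sketch of the alternating frame buffers, assuming a simple two-thread capture/compress split (the threading model is an assumption, not part of the disclosure):

```python
import threading

class PingPongBuffers:
    """Raw frames alternate between two buffers, so the next frame can be
    captured while the previous one is being compressed (cf. frame buffer 0
    503 and frame buffer 1 505)."""

    def __init__(self):
        self.buffers = [None, None]
        self.capture_index = 0
        self.lock = threading.Lock()

    def store_captured(self, frame):
        with self.lock:
            self.buffers[self.capture_index] = frame
            self.capture_index ^= 1        # flip to the other buffer

    def latest(self):
        with self.lock:
            return self.buffers[self.capture_index ^ 1]  # most recent capture
```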
- frame buffers 227 are devices capable of implementing frame buffer 0 503 and frame buffer 1 505.
- the frame buffer that contains the most recent image is used as data for nearest color match function 509 as is the data in color code from client data block 511 .
- Color code from client data block 511 is stored in flash 239 .
- Nearest color match function 509 is a method that can be implemented as a device by microprocessor 223 . A detailed explanation of nearest color match function 509 is shown in FIG. 6 .
- the resulting color code table 513 from nearest color match function 509 is used for color code translation 515 .
- the process translates the RGB representation of each pixel into a more compact form via this color code table translation.
- Color code table 513 is generated by nearest color match 509 and can be stored in ram 241 .
- Color code translation 515 translates a block of RGB values to their color code values and stores the result in coded frame buffer 517 .
- Coded frame buffer 517 can also be implemented as a device stored in ram 241 .
- In parallel to the color code translation, a difference test (difference test block 519) is performed on each block of pixels stored in frame buffer 0 503 and frame buffer 1 505, comparing each block to the corresponding block of the previous frame.
- the noise filter and difference test, shown as difference test block 519, accomplishes this comparison using the current raw frame buffer, in this example raw frame buffer 0 503, and compare frame buffer 521, which stores the pixel values of what is displayed on the user's screen.
- Difference test block 519 is fully illustrated in FIG. 7 .
- Next, the second temporal redundancy check is performed. This process begins in CRC compute block 523 by computing the cyclic redundancy check (CRC) for all blocks that have changed.
- Cyclic redundancy check is a method known in the art for producing a checksum or hash code of a particular block of data.
- the CRCs can be computed for two blocks of data and then compared. If the CRCs match, the blocks are the same. Thus, CRCs are commonly used to check for errors.
- a CRC will be appended to a block of transmitted data so that the receiver can verify that the correct data is received.
- the CRC is used to compare a block of data with blocks of data stored in a cache.
- CRC compute block 523 the CRC is computed for each block of data that has changed.
- the array of CRCs is stored in CRC array buffer 525 .
- Depicted in FIG. 5B is an overview of the second temporal redundancy check and the lossless compression of a full frame of video.
- Wait block 527 waits for the frame buffer and the CRC array to be finished.
- a new frame of video will be received, as seen in FIG. 5A, and the second temporal check will return to wait block 527 until a full frame of video is received.
- Wait block 527 , new video mode check 529 , and invalidate block 531 are methods that can be implemented as devices by microprocessor 223 .
- a new video mode can be declared, if for example, a new local computer, with different bandwidth or color requirements connects to the remote computer.
- a new video mode can also be declared if the bandwidth requirements of the current local computer change.
- new CRC block 533 uses CRC array buffer 525 and cell info array 535.
- Cell info array 535 stores the cached blocks and the CRCs of the cache blocks and can be implemented as a device in ram 241 .
- New CRC block 533 is a device that can be implemented in microprocessor 223 . It also stores the current state of each block to indicate when the block was last updated.
- Cache hit check 537 implemented in microprocessor 223 computes whether a current block is located within the cache. If it is, the cell is marked as complete, or updated, in send cache hit block 539 . This process of checking and marking as updated is completed for all blocks in the image, and can be implemented in microprocessor 223 .
- Compute update block 541 checks for incomplete cells, or cells that need to be updated. All cells that need to be updated are combined to form an update rectangle, as sketched below. The update rectangle is compressed and sent to the client. In the decompression stage, the client can use the update rectangle, along with cache hit messages, to reconstruct the video to be displayed. If there is nothing to update (if the video has not changed between frames), then update check 543 sends the algorithm back to wait block 527. Thus the current frame will not be sent to the client. By eliminating the retransmission of an unchanged frame of video, the algorithm saves on the bandwidth necessary for transmitting the video.
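- As a sketch of how the update rectangle mentioned above might be computed (the cell grid layout and the 64 by 32 cell size are assumptions based on the preferred block size):

```python
def update_rectangle(incomplete_cells, cell_w=64, cell_h=32):
    """Bounding rectangle, in pixels, covering every cell still marked as
    needing an update. incomplete_cells is a list of (col, row) positions.
    Returns None when nothing changed, so the frame can be skipped entirely."""
    if not incomplete_cells:
        return None
    cols = [c for c, _ in incomplete_cells]
    rows = [r for _, r in incomplete_cells]
    left, top = min(cols) * cell_w, min(rows) * cell_h
    right, bottom = (max(cols) + 1) * cell_w, (max(rows) + 1) * cell_h
    return (left, top, right, bottom)
```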
- Before transmission, the update rectangle is first compressed. The method of compression is lossless. One example of lossless black-and-white compression is the JBIG compression method disclosed by IBM. However, the compression method of the present invention is designed for color images.
- Therefore, in bit-slice block 545, the image must be divided into bit slices. A bit slice of the image is constructed by taking the same bit from each pixel of the image. Thus, if the image uses 8-bit pixels, it can be deconstructed into 8 bit slices. The resulting bit slices are stored in bit-slice buffer 547.
- Compute update block 541, update check 543, and bit-slice block 545 are all methods that can be implemented as part of compression device 103 by using microprocessor 223.
- Each bit slice is sent separately to the compression portion of the algorithm shown as compressor block 549 .
- JBIG compression is performed on each bit slice, and the result is sent to server stack block 417 by compress and transmit block 551.
- The JBIG compression method of compress and transmit block 549 is implemented in JBIG compression device 229. Since JBIG is designed to operate on bi-level black and white images, the color video output of the monitor is sent to the compressor as separate bit slices. When the video is fully compressed, it is sent to the client via NIC 223.
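- The bit-slicing step can be pictured with the following sketch, which peels one bit plane at a time off a buffer of 8-bit color codes, most significant plane first, so that each plane could be handed to a bi-level coder. The buffer dimensions and function name are illustrative, and the JBIG encoding itself is only indicated by a comment rather than a real library call.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define WIDTH  64
#define HEIGHT 32
#define PIXELS (WIDTH * HEIGHT)

/* Pack bit 'plane' (0 = least significant) of every 8-bit pixel into a
 * bi-level buffer, eight pixels per output byte, most significant bit first. */
static void extract_bit_slice(const uint8_t *pixels, uint8_t *slice, int plane)
{
    memset(slice, 0, PIXELS / 8);
    for (int i = 0; i < PIXELS; i++) {
        int bit = (pixels[i] >> plane) & 1;
        slice[i / 8] |= (uint8_t)(bit << (7 - (i % 8)));
    }
}

int main(void)
{
    static uint8_t update_rect[PIXELS];   /* 8-bit color codes for one block */
    static uint8_t slice[PIXELS / 8];     /* one bi-level plane              */

    memset(update_rect, 0xA5, sizeof update_rect);   /* dummy image data     */

    /* Planes are taken most significant first so the most important image
     * information is compressed and transmitted first.                      */
    for (int plane = 7; plane >= 0; plane--) {
        extract_bit_slice(update_rect, slice, plane);
        /* A real implementation would now pass 'slice' to a bi-level coder
         * such as JBIG and queue the result for transmission.               */
        printf("plane %d ready: %zu bytes of bi-level data\n", plane, sizeof slice);
    }
    return 0;
}
```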
- Time check 553 waits until 300 ms have passed since the previous frame capture before returning the algorithm to wait block 527.
- Nearest color match function 509 selectively converts less frequently occurring colors to more frequently occurring colors by mapping the less frequently occurring colors to the color-coded representation of the more frequently occurring colors.
- Nearest color match function 509 operates on one block of the images stored in raw frame buffer 0 503 and raw frame buffer 1 505 at a time.
- Grab block 600 is used to extract a block of pixels from the image stored in raw frame buffer 0 503 or raw frame buffer 1 505. In this example, raw frame buffer 0 503 is the buffer from which grab block 600 extracts the block of pixels.
- The extracted block is 64 by 32 pixels; however, the method can function on blocks of any size.
- The purpose of nearest color match function 509 is to eliminate noise introduced into a block of pixels by the A/D conversion. This is accomplished by converting less frequently occurring pixel values to similar, more frequently occurring pixel values. This is done primarily through histogram analysis and difference calculations.
- Nearest color match function 509 generates a histogram of pixel values in histogram generation block 601. The histogram measures the frequency of each pixel value in the block of pixels extracted by grab block 600.
- The histogram is then sorted, such that a list of frequently occurring colors, popular color list 603, and a list of least frequently occurring colors, rare color list 605, are generated. The threshold for each list is adjustable.
- The compression algorithm analyzes each less frequently occurring pixel value to determine whether the pixel should be mapped to a value that occurs often. To do so, grab next rare color block 607 picks a pixel value from rare color list 605 and compares it to a high frequency color pixel extracted by grab next popular color block 609. The distance between the low frequency pixel value and the high frequency pixel value is computed in compute distance block 611.
- Distance is a metric computed by comparing the separate red, green, and blue values of the two pixels. The distance metric, "D," can be computed in a variety of ways.
- One such example of a distance metric is as follows:
- R 1 is the red value of the low frequency pixel
- R 2 is the red value of the high frequency pixel
- G 1 is the green value of the low frequency pixel
- G 2 is the green value of the high frequency pixel
- B 1 is the blue value of the low frequency pixel
- B 2 is the blue value of the high frequency pixel.
- This formula yields a distance metric, D, which indicates how different the color values are between a less frequently occurring pixel and a more frequently occurring pixel.
- The goal of the algorithm is to find the more frequently occurring pixel value that yields the lowest D for the current less frequently occurring pixel. Therefore, a comparison is performed in closest distance check 613 for each D that is computed. Every time a D is computed that is lower than all previously computed values of D, an update is completed by update closest distance block 615.
- Once all of the more frequently occurring pixel values have been checked, a computation in threshold check 619 is performed to see whether the lowest D is within a predefined threshold. If this D is within the threshold, color code table 513 is updated by update color map block 621, mapping the less frequently occurring pixel to the color code value of the more frequently occurring pixel that yielded this D value. This process is repeated for all less frequently occurring pixel values, and color code table 513 is updated accordingly.
- This process operates on every block in the image.
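- A minimal sketch of this per-block noise reduction is given below. Because the published application presents the distance calculation as an equation that is not reproduced in this text, the sum-of-squared-differences metric used here, along with the rare/popular thresholds and the distance threshold, should be read as assumptions chosen only to make the example concrete.

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCK_PIXELS   (64 * 32)
#define RARE_MAX       4      /* assumed "rare" threshold: 4 or fewer occurrences    */
#define POPULAR_MIN    25     /* assumed "popular" threshold: 25 or more occurrences */
#define DIST_THRESHOLD 192    /* assumed maximum distance for remapping a rare color */

struct rgb { uint8_t r, g, b; };

struct hist_entry { struct rgb color; int count; int mapped_to; };

/* Assumed distance metric: sum of squared component differences.  The
 * application defines D in terms of R1, R2, G1, G2, B1, and B2, but the
 * exact formula is not reproduced here.                                     */
static int distance(struct rgb a, struct rgb b)
{
    int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

static int same(struct rgb a, struct rgb b)
{
    return a.r == b.r && a.g == b.g && a.b == b.b;
}

int main(void)
{
    static struct rgb block[BLOCK_PIXELS];
    static struct hist_entry hist[BLOCK_PIXELS];
    int unique = 0;

    /* Dummy block: mostly one blue, a few red, one "noisy" light blue.       */
    for (int i = 0; i < BLOCK_PIXELS; i++) block[i] = (struct rgb){0, 0, 200};
    for (int i = 0; i < 4; i++)            block[i] = (struct rgb){200, 0, 0};
    block[5] = (struct rgb){4, 4, 206};

    /* Histogram: count how often each distinct pixel value occurs.           */
    for (int i = 0; i < BLOCK_PIXELS; i++) {
        int j;
        for (j = 0; j < unique && !same(hist[j].color, block[i]); j++) ;
        if (j == unique) { hist[j].color = block[i]; hist[j].count = 0; unique++; }
        hist[j].count++;
    }

    /* For each rare color, find the closest popular color; remap it if the
     * lowest distance falls under the threshold.                             */
    for (int i = 0; i < unique; i++) {
        hist[i].mapped_to = i;
        if (hist[i].count > RARE_MAX) continue;
        int best = -1, best_d = 0;
        for (int j = 0; j < unique; j++) {
            if (hist[j].count < POPULAR_MIN) continue;
            int d = distance(hist[i].color, hist[j].color);
            if (best < 0 || d < best_d) { best = j; best_d = d; }
        }
        if (best >= 0 && best_d <= DIST_THRESHOLD) hist[i].mapped_to = best;
    }

    for (int i = 0; i < unique; i++)
        printf("color #%d (count %4d) -> color #%d\n", i, hist[i].count, hist[i].mapped_to);
    return 0;
}
```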
- Current pixel block 700 contains one block of pixels from the raw frame buffer.
- Previous pixel block 701 contains the corresponding block of pixels from compare frame buffer 521 .
- The process begins by extracting corresponding pixel values for one pixel from current pixel block 700 and previous pixel block 701. These pixels are stored in get next pixel block 703, and the pixel values are then compared using a distance metric.
- The distance metric is computed in distance metric block 705 using the following formula, where R1, G1, and B1 are the red, green, and blue values, respectively, of the frame buffer pixel, and R2, G2, and B2 are the red, green, and blue values, respectively, of the compare frame buffer pixel.
- The distance metric, D, is compared with a noise tolerance threshold in noise threshold check 707. If D is greater than the noise threshold, it is added to a running sum stored in accumulation block 709. If the two pixels differ by less than this threshold, the difference is considered to be noise, or insignificant, and thus is not added to the accumulation. This process enables efficient filtering of noise using a block-by-block comparison.
- This process of computing distances and adding values greater than the predefined threshold to a running total continues until the last pixel of the block is reached, as determined by last pixel check 711. Once the last pixel is reached, the running total is compared with a second threshold, the block threshold, in cell threshold check 713. If the running total is greater than the block threshold, the current block from raw frame buffer 0 503 is considered different than the one in compare frame buffer 521. Otherwise, the two are considered close enough to be considered the same.
- If the block is considered to have changed, a procedure is run as shown in new pixel block 715: a flag is set indicating that the particular block has changed so that it will be transmitted to local computer 111, and compare frame buffer 521 is updated with the block of pixels to be transmitted.
- Otherwise, the block is considered to be unchanged from the previous block, and in no pixel change block 721 a flag is set to indicate that this block does not have to be transmitted to local computer 111.
- The second check for temporal redundancy can then be performed on the blocks that have changed since the previous transmission.
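- The two-level thresholding just described can be summarized in code roughly as follows. The sketch works on gray-level values so that it mirrors the FIG. 7B example directly; the patent computes the per-pixel distance from the red, green, and blue components, and the thresholds and block size here are illustrative.

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_W 8
#define BLOCK_H 8
#define PIXEL_THRESHOLD 10    /* per-pixel noise tolerance (illustrative)  */
#define CELL_THRESHOLD  200   /* per-block change threshold (illustrative) */

/* Return nonzero if the current block differs significantly from the
 * corresponding block of the compare frame buffer.  Per-pixel distances
 * at or below the pixel threshold are treated as noise and ignored;
 * distances above it are accumulated, and the sum is tested against the
 * cell threshold.                                                         */
static int block_changed(const uint8_t *current, const uint8_t *previous)
{
    long sum = 0;
    for (int i = 0; i < BLOCK_W * BLOCK_H; i++) {
        int d = abs((int)current[i] - (int)previous[i]);
        if (d > PIXEL_THRESHOLD)
            sum += d;
    }
    return sum > CELL_THRESHOLD;
}

int main(void)
{
    uint8_t previous[BLOCK_W * BLOCK_H];
    uint8_t current[BLOCK_W * BLOCK_H];

    /* Reproduce the FIG. 7B scenario: an all-black block, then a frame in
     * which one pixel turns white (a cursor) and one picks up slight noise. */
    memset(previous, 0, sizeof previous);
    memcpy(current, previous, sizeof current);
    current[0] = 255;   /* real change: cursor pixel        */
    current[9] = 2;     /* noise: below the pixel threshold */

    printf("block changed: %s\n", block_changed(current, previous) ? "yes" : "no");
    return 0;
}
```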
- FIG. 7B is used to illustrate the two level thresholding operation on a sample block.
- In this example, 8×8 pixel block sizes are used. Each pixel is given a value between 0 and 255, as is common in the art: 0 represents a black pixel, 255 represents a white pixel, and intermediate values represent shades of gray.
- Second frame compare buffer 751 is a block of pixels from the previously transmitted frame. Since every pixel in second frame compare buffer 751 has the value 0, second frame compare buffer 751 represents an area that is all black. Previous pixel 752 is the upper leftmost pixel of second frame compare buffer 751.
- A small white object, such as a white cursor, then enters the area of the screen represented by second frame compare buffer 751. This is represented in first frame buffer 753.
- In first frame buffer 753, a majority of the pixels are black; however, the upper left pixel is white.
- First frame buffer 753 represents the same spatial area of the video as second frame compare buffer 751 , just one frame later.
- Current pixel 754 is the same pixel as previous pixel 752, just one frame later. The white cursor is represented by current pixel 754, which has a pixel value of 255.
- In addition, previous black pixel 755 is now current gray pixel 756, which has a value of two.
- the “pixel threshold” is 10
- the “cell threshold” is 200.
- the two-level thresholding algorithm is performed between first frame buffer 753 , and second frame compare buffer 751 .
- the difference between previous pixel 752 , and current pixel 754 is added to the running sum because the difference (255) exceeds the “cell threshold.”
- the difference between previous black pixel 755 and current gray pixel 756 is not added to the sum because that difference (2) does not exceed the cell threshold.
- the running total will therefore equal 255. Since this total is greater than the cell threshold of 200, the block is considered to have changed.
- This example illustrates the advantages of the two-level threshold: the noise that entered into first frame buffer 753 was ignored, but at the same time, the real change was recognized.
- FIG. 8 illustrates the overall decompression method.
- the process begins by waiting for a message in wait for message block 801 .
- The message is received from local communication device 126 and stored in an area readable by the decompression method.
- Messages are transmitted using the TCP/IP protocol. When a message is received from the compression device, it is stored locally in TCP/IP stack 803.
- Wait for message block 801 imports this message from TCP/IP Stack 803 .
- Other embodiments may use a protocol other than TCP/IP; however, the functionality of the present invention does not change.
- The message received by wait for message block 801 contains either compressed video data or a flag indicating that the updated frame of video is stored in cache.
- In cache hit decision block 805, analysis of the message is performed to determine whether the updated video is stored in the cache. If the updated video is in the cache, the image can be reconstructed from data already stored locally. This reconstruction occurs in cache copy block 807, where data is transferred from the cache to a frame buffer holding data representing the most up-to-date video.
- Otherwise, decompression of the transmitted video occurs in decompress block 809.
- The preferred embodiment uses JBIG as the lossless compression technique. Therefore, the decompression of the video frame must occur on one bit plane of data at a time. After each bit plane is decompressed, it is merged with the rest of the bit planes stored in the frame buffer. This merging occurs in merge block 811. Once the full frame buffer is constructed, the display on the local computer is updated, as seen in update display block 813.
- Alternatively, the display on the local computer can be updated after each bit plane is received. A user does not have to wait for the whole frame of video to be received before it displays on the screen. This method is useful if the bandwidth available for video transmission varies. This progressive transmission is one advantage of using JBIG over other compression methods.
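- The progressive reassembly of bit planes on the decompression side might look roughly like the sketch below. It assumes each plane has already been decoded into a packed bi-level buffer (the JBIG decoding itself is replaced by stand-in data), and all buffer sizes and names are illustrative.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define WIDTH  64
#define HEIGHT 32
#define PIXELS (WIDTH * HEIGHT)

/* Merge one decompressed bi-level plane into the 8-bit frame buffer.
 * 'plane' is the bit position (7 = most significant); the packed buffer
 * holds eight pixels per byte, most significant bit first.  Because each
 * plane touches a different bit position, an OR is enough to combine them. */
static void merge_bit_plane(uint8_t *frame, const uint8_t *packed, int plane)
{
    for (int i = 0; i < PIXELS; i++) {
        int bit = (packed[i / 8] >> (7 - (i % 8))) & 1;
        frame[i] |= (uint8_t)(bit << plane);
    }
}

int main(void)
{
    static uint8_t frame[PIXELS];            /* merge frame buffer          */
    static uint8_t plane_data[PIXELS / 8];   /* one decoded bi-level plane  */

    memset(frame, 0, sizeof frame);

    /* Planes arrive most significant first; after each merge the display
     * could already be refreshed with a progressively better image.        */
    for (int plane = 7; plane >= 0; plane--) {
        memset(plane_data, plane & 1 ? 0xFF : 0x00, sizeof plane_data); /* stand-in for JBIG output */
        merge_bit_plane(frame, plane_data, plane);
        printf("after plane %d, pixel 0 = 0x%02X\n", plane, frame[0]);
    }
    return 0;
}
```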
- FIG. 9 further illustrates the decompression method disclosed in FIG. 8 .
- the method begins with wait for message block 801 . It then makes a series of three decisions.
- The first, seen in new video mode message check 901, determines whether the message is a new video mode message.
- a new video mode message can be sent for a variety of reasons, including a bandwidth change, a change in screen resolution, or color depth, or a new client. This list is not meant to limit the reasons for sending a new video mode message, but instead to give examples of why it may occur.
- If a new video mode message is received, the decompression device notifies application 903.
- Application 903 is the program running on the local computer that executes the operations of the decompression device.
- Application 903 interfaces with the input/output of local computer 111. Any updates in data must therefore be sent to application 903. Once application 903 is notified, the decompression device enters free buffer block 907. Free buffer block 907 frees all buffers, including any memory devoted to storing previously transmitted frames. The decompression method then returns to wait for message block 801, waiting for a message from compression device 103.
- If a new video mode message was not sent, the message is checked to see whether it indicates that the current frame of video is stored in cache. This check is seen in cache hit decision block 805. If the decompression method determines that the message does indicate a cache hit, it will update merge frame buffer 909 with data from cache frame buffer 913, as seen in notify application layer block 915. Merge frame buffer 909 contains the most up-to-date data indicating what should be displayed on the local monitor. Cache frame buffer 913 stores the same recently transmitted frames in cache that are stored on the compression device. Thus, if a "cache hit" message is received by the decompression device, the video data needed to complete the update of merge frame buffer 909 is already available locally in cache frame buffer 913. Copy block 914 receives cache frame buffer 913 data as input and outputs this data to merge frame buffer 909.
- Application 903 receives data from merge frame buffer 909 and translates the data into a pixel format that can be displayed on the screen.
- Application copy block 919 completes this translation and sends the data in current screen pixel format to an update frame buffer 921 which is an area of memory that can be read by display 923 .
- Display 923 may include a video card, memory, and any additional hardware and software commonly used for video monitors.
- If the message does not indicate a cache hit, the decompression method checks whether the message contains compressed data in compressed data message decision block 925. If there is no compressed data, the algorithm restarts at wait for message block 801. Otherwise, the data is decompressed into bit slice buffers in decompress data block 927. If the JBIG compression algorithm is used, the data was divided into bit slices when it was compressed. Therefore, the first step in the decompression of the data is to divide it into those bit slices and decompress each bit slice. As each bit slice is decompressed, it is stored in bit slice frame buffer 929 and then combined with the previous bit slices via an "OR" type operation completed in "OR" block 931.
- End of field decision block 933 determines whether all of the data from one field of the current frame has been received. If a full field has been received, the decompression method notifies application 903 in notify application layer block 915. Again, as with a cache hit, the notification allows the application to read from merge frame buffer 909. The data from merge frame buffer 909 is converted into the current screen pixel format in application copy block 919 and transmitted to update frame buffer 921. The data in update frame buffer 921 is used by display 923. If end of field decision block 933 determines that the full field has not arrived, the method returns to wait for message block 801 to wait for the rest of the message.
- If a full field has been received, a second check in decision block 935 is performed to see whether the field is the last field included in the message. If it is, the cache is updated by update cache block 941. Otherwise, the method continues to wait for more data from the compression device in wait for message block 801. In update cache block 937, new data overwrites older data in the cache. This keeps the cache up-to-date and synchronized with the compression device cache.
- After the completion of the cache update, the system returns to wait for message block 801. This process continues as long as the compression device sends frames of video.
- FIG. 10 illustrates an alternative embodiment in which the outputs of 4-input 4-output compression switch 1001 are connected to 42-port Paragon KVM switch 1003 via four compression user stations 1005.
- 4-input 4-output compression switch 1001 utilizes the compression methods of the present invention within a 4-input 4-output KVM switch.
- 42-port Paragon KVM switch 1003 is a KVM switch with 4 inputs and 42 outputs. In this configuration there can be up to four local computers 111 .
- Each compression user station 1005 receives one output of 4-input 4-output compression switch 1001 , and sends the output to the input of 42-port Paragon KVM Switch 1003 .
- In this configuration, a compression device, in this case 4-input 4-output compression switch 1001, can control 108 total servers, of which 28 are remote Sun workstations 1011 and the other 80 are remote PC servers 1013.
- FIG. 11 illustrates an alternate configuration of the present invention in which 8 local computers control 256 servers.
- In this embodiment, three 32-channel KVM switches 1017 are used in a two-level configuration. The first-level 32-channel KVM switch 1017 is used as the input to the other two 32-channel KVM switches 1017.
- Each remote server 1015 has a user console 1019 that accepts input from 32-channel KVM switch 1017 and converts the input into a form readable by remote server 1015. The output from each 4-input 4-output compression switch 1001 is sent to compression user stations 1005 to convert this output into a form readable by 32-channel KVM switch 1017.
- FIG. 12 illustrates an alternate configuration wherein 8 local computers control 1024 remote servers.
- In this configuration, two 4-input 4-output compression switches 1001 are used in conjunction with three levels of 32-channel KVM switches 1017.
- As before, each remote server 1015 has a user console 1019 capable of accepting input from 32-channel KVM switch 1017 and providing output to remote server 1015. Further, the output from each 4-input 4-output compression switch 1001 is sent to compression user stations 1005.
- FIG. 13 illustrates an example of an alternate embodiment of the present invention wherein 16 local computers control 256 remote servers.
- This configuration shows how, with a combination of the present invention and KVM switches, remote computers can be controlled locally, or at the remote location itself.
- In FIG. 13, there is a 16-input 16-output KVM switch 1021 with inputs connected to a combination of local computers 111 and remote controlling computer 1023.
- The local computers 111 connect to the remote servers 1015 via 4-input 4-output compression switch 1001 and compression user station 1005.
- The outputs of the 16-input 16-output KVM switch are sent to a combination of remote servers 1015 and remote servers 1015 connected to additional 16-input 16-output KVM switches 1021. In this way, remote servers 1015 can be controlled both by the local computers 111 and by the remote controlling computer 1023.
Description
- The present invention is directed generally to the field of the compression and digitization of analog video. More particularly, this invention relates to a method of effectively digitizing and compressing the video output of a computer such that it may be monitored and controlled from a remote location.
- The trend towards distributed computing, coupled with the pervasiveness of the Internet, has led to a decentralization of resources, such as files and programs, for users and system administrators. As this trend of decentralization continues, user information and data has the potential of being stored on servers and computers remotely located all over the world. As this decentralization expands, system administrators have the task of monitoring and updating computers spread over great distances. The task of monitoring and maintaining these computers is physically trying, if not impossible without a method of easily accessing and controlling the remotely located computers.
- To this end, hardware and software solutions have been developed which allow users to access and control computers remotely. Early solutions included software programs that allowed text based control of remotely located computers. An example of this would be a user running a telnet program on a simple Windows-based computer to access files and run programs on a UNIX server. In this implementation, a telnet server or “daemon” is installed and started on the UNIX server. The daemon continually runs on the machine searching for and responding to new requests. A user wishing to access information on that machine starts a telnet client program which allows the user to issue a request to the daemon. After verification of the user's identity, the user has access to all of or a portion of the information on the accessed remote computer. The method is useful, but in many instances has limitations and many drawbacks.
- For example, in a Windows-based computer with a telnet operation, the telnet access is dependent upon the server not crashing and continually running the telnet daemon. If the server fails, crashes, or stops this daemon, a system administrator must physically restart the remote computer or the daemon on-site. Thus, this scheme is reliant on both a robust server and a robust daemon. Furthermore, the telnet programs are normally limited to text.
- More advanced software programs have been developed that allow for graphical user interfaces and greater degrees of control. Examples include Windows® XP® remote desktop, and common PCAnywhere® programs. In many of these solutions, the user can control and view the remote computer, as if it were local, with full control of the mouse and keyboard. However, like the telnet scheme, these solutions rely on software running on both the client computer and the server computer device. Specifically, the server has a daemon program similar to the daemon used in the telnet scheme. If the daemon fails, the local computer will lose control of the remote computer. Like the telnet solution, these graphical solutions still rely on software and are thus faced with substantial limitations.
- Another major drawback of these software solutions is the consumption of processing power on the remote computer. Specifically, the daemon program requires resources such as memory and microprocessor execution time from the server. In addition, once the connection is established, these solutions normally use the remote computer's existing modem or Internet connection. Thus, these software solutions consume a substantial portion of the bandwidth available to the server. Both the bandwidth consumption and the power consumption can severely degrade the performance of the server.
- In addition, the server software does not allow the system administrator full access to the remote computer at all times. For example, while the computer is rebooting and starting the operating system, the daemon program is not running. Therefore, the system administrator does not have access to the server during these periods. This is a major pitfall especially if the system administrator wishes to view or edit BIOS settings or view the server restart.
- To avoid the aforementioned pitfalls of these software solutions, system administrators use hardware solutions which are less reliant on the remote server in order to function. For example, keyboard, video, and mouse (“KVM”) switches have been developed that allow a single keyboard, video, and mouse to control multiple computers. The computers are often remotely located from the user or system administrator's computer (i.e., the local computer). These switches route the keyboard and mouse signals of the user computer to one of the remotely located computers chosen by the user. Similarly, the video output of the chosen computer is routed to the attached local monitor. Generally the user is able to switch to any of a series of remote computers.
- A KVM switch is useful for many reasons. For example, a user with many computers can save space and cost by eliminating the extra mice, keyboards, and monitors that would otherwise be needed for each remote computer. This cost and space saving technique is very practical in many environments, including server farms and web-hosting facilities where space constraints are crucial.
- Additional hardware solutions include intermediate routers and cables that increase the distance that may separate a user and a remote computer. These solutions can also increase the number of computers a user may control with one keyboard, monitor, and mouse. However this network is separate from existing LANs and Internet connections and may be hampered by a distance limitation.
- The KVM switches have advantages over software solutions because they are not reliant upon the remote computer to function. If a system administrator needs to control and view a computer during “boot up” or to fix a problem with BIOS, the user can accomplish this via a remote keyboard, mouse and monitor linked via a KVM switch. Conversely, this would not be possible with a software solution.
- Further, the KVM switch does not use processing power on the remote computer. From the point of view of both the controlled computer and the local computer, it is as if the video, mouse and keyboard are directly connected to the remote computer. Thus, no additional resources on the host computer are consumed.
- Further, it is easier to make KVM switches that are operating system and machine independent. As long as the KVM ports are compatible with the keyboard, video and mouse connections, and with the output/input ports of the target computer, any KVM switch can be used, regardless of the operating system. With software solutions, a separate version of the software is generally needed if the user must control a variety of computers with a variety of operating systems.
- Although KVM switches greatly improve the control of remote units, KVM switches generally rely on direct connections for sending signals between the host computer and the keyboard, video, and mouse, and these signals degrade over distance. For example, after a certain distance, the signal degradation affects the quality of the video signal transmitted. Therefore, if a system administrator or user needs access to a computer, the user still has to be within a certain distance of the computer.
- In order to circumvent this transmission quality degradation over extended distances, a KVM switch whereby the keyboard, video, and mouse signals are sent over standard Internet protocols or telephone connections may be utilized. This allows any Internet- or modem-enabled device with a keyboard, video, and mouse to control a remote computer regardless of the physical distance between a user computer and a remote device.
- However, it has been proven in the art that the creation of such a system is much more difficult to implement than a direct wired KVM switch. In order to send video, keyboard, and mouse signals using a protocol such as those used on the Internet (e.g., TCP/IP, UDP), such analog signals must first be converted to digital signals. The digital signals, in uncompressed form, require a large bandwidth to be transmitted in near real-time. Generally, even high-speed connections such as cable and DSL are incapable of accommodating such bandwidth requirements. Furthermore, a majority of home users still connect to the Internet via a modem with further bandwidth limitations. Therefore, in order for such a device to be useful in these situations, the analog outputs of conventional monitors must be both converted to a digital signal and compressed.
- Video compression takes advantage of the redundancies in video signals, both between successive frames of video, and within each individual frame. The transmission of a video signal from a computer monitor output generally has large amounts of both spatial and interframe redundancies. For example, in a near idle computer, the only change between successive frames of video might be the blinking of a cursor. Even as a user types a document, a majority of the screen does not change over periods of time.
- Existing video compression standards are designed for common video applications. Generally, these compression systems are inappropriate for KVM switch application, since these systems do not take into account specific KVM architecture. There exists a need in the art for a specialized KVM-specific algorithm capable of taking advantage of temporal redundancy, yet still capable of transmitting changes without a large loss of information.
- Further, most forms of video compression known in the art require complicated calculations. For example, the MPEG standards use the discrete cosine transform as part of the compression algorithm. This standard relies on the recognition of "motion" between frames to calculate motion vectors that describe how portions of the image change over a period of time. These calculations are complicated and either require expensive hardware or result in extended transmission periods due to increased computation time.
- Finally, many of the existing video compression techniques are lossy (i.e. they reduce the amount of information transmitted in order to reduce bandwidth). Typically, such lossy techniques either reduce the detail of an image or reduce the number of colors. Although reducing colors could be part of an adequate compression solution for computer monitor output, excessive reduction of images may yield a poor video transmission resulting in an illegible video reproduction. For example, if a computer user were attempting to use a word processor, reducing detail could lead to blurry or illegible text.
- The field of compression and digitization of computer video through a video switch has seen explosive development over the years, allowing the transfer of video data over extended distances at increased speeds. For example, in a primitive form, in 1992 and 1993, Apple Computer developed a technology whereby one computer was controlled by another computer via emulation of keyboard and mouse protocols. This technology was implemented as part of a "computer on a card" product. The product consisted of a full computer developed on a single card that was designed to plug directly into a standard Macintosh computer. This Macintosh computer controlled the computer on a card via the keyboard and mouse emulation technologies. However, the video output from the computer on a card was routed to the Macintosh display in analog form. A digitization or compression method was not implemented, nor was a means for transmission of the video over great distances.
- Other known methods in the art provide systems for converting VGA output to NTSC video. Such products (for example, TView Gold from Focus Enhancements of Campbell Calif.) allowed a computer's output to be viewed on a standard television. Over the years, numerous products have incorporated such technology whereby the output from a PC was digitized and displayed on a television screen. These products allowed the PC to be controlled via keyboard and mouse emulation. The user inputted keyboard and mouse signals into the apparatus, which manipulated and routed the signals to the PC. Although the system digitized video signals from a PC and completed some analysis to determine the size of the video, no compression methods were implemented.
- Other products known in the art convert video images from a Macintosh computer to an NTSC video output for display on a television screen. Generally, these products are cards that plug directly into a specific platform, such as a Macintosh computer, and are only capable of operating with that type of system. A common example of this product is called an L-TV. The L-TV product is designed such that it can read directly from the video memory of the Macintosh computer. In addition, some video compression techniques are used in the L-TV product such that only portions of the image that change between frames are retransmitted. However, by reading directly from video memory, L-TV only functions with a Macintosh computer. Other advances in the art include the development of software-based simulation systems.
- Several patents are directed to the field of compression and digitization of computer video signals. In addition, in certain instances, some of these systems operate in an environment of a user computer controlling a remote computer.
- For example, Widergren U.S. Pat. No. 4,302,775 discloses a method for comparing sub blocks of an image between successive frames of video and only encoding the differences between the blocks for transmission. In Widergren, the block-by-block comparisons are completed in the transform domain. Thus the system requires extra computations necessary to compute the transform of the image thereby increasing the time necessary to complete the video compression. In order to obviate the problem and reduce transmission times, the disclosure of Widergren requires faster or more complex hardware. The present invention improves upon these time consuming extra computations by completing the block comparisons in the spatial domain. For example, the present invention utilizes a two-level thresholding method to ensure that the block comparisons are effective.
- Santamäki et al. U.S. Pat. No. 4,717,957 teaches a method of caching previously occurring frames to decrease the necessary bandwidth for transmission of video. The process disclosed compares pixels from previous frames and only retransmits the changes between the pixels. Art disclosed before Santamäki compared a current frame of video with only the previous frame. Santamäki teaches a method that improves on previously existing art by adding a reference memory which may be used to store more than just the previous frame. Therefore, Santamäki teaches a method where the size of the cache is increased, thereby increasing the likelihood that a new frame of video will not have to be retransmitted.
- The present invention improves on this disclosure by using two separate methods of storing previous frames and comparing the current frame of video with the previous frame. Furthermore, the present invention improves upon the efficiency of the cache comparisons by comparing the cyclic redundancy check for each block being compared.
- Carr et al. U.S. Pat. No. 5,008,747 discloses a method for block-by-block comparison of sequential frames of video. Only changed pixels are retransmitted between frames of video. Carr et al. teaches a method whereby the changed pixels are stored in a matrix which is vector-quantized to one of a standard set of matrices. Thus, Carr et al. discloses a video compression technique that uses temporal redundancies to reduce the data that must be transmitted. However, Carr et al. fails to disclose a method and apparatus capable of providing a reduced-time transmission of video. Further, Carr et al. fails to disclose a method of quantizing pixels before comparing frames. Thus, the disclosures of Carr et al. would not be suited for remotely controlling a computer because they fail to teach methods that take into account noise that may be introduced into the video through digitization errors.
- Astle U.S. Pat. No. 5,552,832 discloses a camera that receives analog video signals and converts said signals to digital signals by implementing a microprocessor that divides the signals into blocks. The blocks of video are then classified and run-length encoded. Thus Astle discloses a video compression method that operates on blocks of pixels within an image. The present invention improves upon the compression techniques disclosed by taking advantage of temporal redundancies between images. Further, the present invention increases redundancy through noise elimination and a color lookup table.
- Perholtz et al. U.S. Pat. No. 5,732,212 discloses a method for digitizing video signals for manipulation and transmission. The patent discloses a method whereby video raster signals from the data processing device are analyzed to determine the information displayed on a video display monitor attached to the data processing device. Perholtz et al. teaches a method for digitizing and compressing video. However, the method compresses video by analyzing the content of the video and sending said content. Thus in general, Perholtz does not teach a method in which the full graphical interface is displayed to the user. The present invention improves upon the disclosure of Perholtz by providing an improved graphical interface to the user. Further, the present invention improves upon this disclosure by compressing the video based upon spatial and temporal redundancies.
- Frederick U.S. Pat. No. 5,757,424 discloses a system for high-resolution video conferencing without extreme demands on bandwidth. The system disclosed creates a mosaic image by sampling portions of a scene and combining those samples. This system allows for the transmission of video over low bandwidths. Frederick's system in general is used within a camera for transmitting video. Though Frederick teaches a way to reduce the data necessary for transmission, Frederick does not teach methods for comparing frames of video. In addition, Frederick does not teach a system whereby the video that must be sent is compressed using lossless compression. The present invention overcomes the limitations of Frederick's disclosures by using lossless compression in the spatial domain and two temporal redundancy checks. Further, the present invention teaches video compression in the context of controlling a remote computer rather than in the context of transmitting video from a camera.
- Schneider U.S. Pat. No. 6,304,895 discloses a system for intelligently controlling a remotely located computer. Schneider further discloses a method of interframe block comparison in which pixel values that change even slightly are retransmitted. This leads to unnecessary retransmission of noisy pixels. In another embodiment, Schneider will retransmit an entire block if a threshold percentage of pixels within the block have changed.
- For example, if all pixels in the current frame change from black to a dark gray due to noise introduced by the A/D conversion, all pixels will also be retransmitted unnecessarily because the total percentage (i.e. 100% of the pixels) would clearly exceed any predetermined percentage threshold. Schneider also fails to take into account legitimate changes. For example, an intended change to only a few pixels, e.g., 5 pixels, will be missed if the threshold is set to 6 pixels.
- The present disclosure overcomes these shortcomings by recognizing minor changes due to noise, by implementing a more efficient calculation method and with a cache capable of storing previous blocks. Furthermore, the present disclosure recognizes significant changes (i.e. a pixel changing from black to white due to a cursor). In addition, slight color variations will be smoothed due to the color code and noise reduction methods of the present invention.
- Pinkston U.S. Pat. No. 6,378,009 teaches a method of sending control, status, and security functions over a network such as the Internet from one computer to another. Although Pinkston discloses a switching system that packetizes remote signals for the Internet, no video compression methods or conversions are disclosed. Instead, Pinkston teaches a method whereby a system administrator can access a KVM switch remotely over the Internet and control the switch. Therefore, in and of themselves, Pinkston's disclosures would not allow a remote computer to be operated over a low-bandwidth connection.
- The digitization of a video signal and its subsequent compression allows a computer to be controlled remotely using standard Internet protocols. The compression allows an interface to utilize digital encryption techniques known in the art. Non-digital KVM switches, in transmitting analog signals, do not allow or interface well with digital encryption schemes, such as 128-bit encryption. If a computer with sensitive information needs to be controlled from a remote location, there needs to be protection from potential hackers or competitors.
- Therefore, what is needed is an Internet, LAN/WAN, or dial-up enabled KVM switch that allows for near real time transmission of compressed video. The compression must be efficient enough to transmit video in near real-time over modem bandwidths. However, the compression must not be too lossy, because the resulting image must be discernible. Finally, the KVM switch should work across multiple platforms (e.g. Macintosh, IBM compatible, and UNIX). Therefore, the switch cannot take advantage of platform dependent GUI calls, or similar system dependent codes which indicate when and where updates in the video are needed.
- Based on the aforementioned disclosures and related technologies in the art, it is clear that there exists a need for a video compression method designed specifically for remotely monitoring and controlling a computer that is accurate and virtually provided in real-time. Furthermore, there exists a need in the art for a solution that allows for platform-independent monitoring of computers, even at the limited bandwidths provided by standard modem connections.
- Most systems employed in the art for compressing and digitizing video signals fail to efficiently transmit synchronized video data. Therefore, the present disclosure provides an improved video compression algorithm that offers efficient bandwidth usage and accurate video transmission. The present invention is directed to keyboard, video, and mouse control systems. The disclosure relates to a method and device for the digitization and compression of video signals such that the signal is transmitted via a modem, Internet connection, LAN/WAN, etc. The present invention includes a corresponding decompression technique that allows video signals to be displayed on a monitor. More particularly, in the preferred embodiment, this compression technique allows for the viewing of a remote computer's video output on a local video output device such as a monitor. Furthermore, the invention can be interfaced with a KVM switch so that multiple remote computers can be controlled and monitored.
- In the present invention, the keyboard and mouse signals are transmitted over standard modem and Internet connections synchronized with the video transmission. In the preferred embodiment, the video signal is transmitted from a remote computer to a local computer whereas the keyboard and mouse signals are transmitted from the local computer to the remote computer.
- The present invention allows for platform independent communication between computers. Thus, the local computer can control one or more remote computers utilizing a variety of computer platforms, including, but not limited to Windows, Mac, Sun, DEC, Alpha, SGI, IBM 360, regardless of the operating system of the local computer.
- The present invention may be used to control a remote serial terminal device, such as a printer, fax machine, etc. In the preferred embodiment, a serial terminal device can be connected directly to the present invention or through a serial concentrator and can be controlled from the local application. In another embodiment, the serial concentrator is linked with the keyboard, video, and mouse.
- Accordingly, the device uses compression techniques that have been designed to improve video transfer times for video having characteristics exhibited by computer monitor output. The compression can be accomplished using readily available hardware providing a viable device that would allow a remote computer to be controlled via a local keyboard, video and monitor equipped computer, so long as the remote device and the local keyboard, video and monitor can communicate via the Internet, a direct modem connection, or a LAN/WAN etc.
- Since the system allows for platform independent communications, the video compression does not use operating system specific hooks, nor does the compression employ platform specific GDI calls. Instead, the algorithms take advantage of spatial and temporal redundancies in the video. In the first step of the video compression method, analog video is sent to an A/D converter. The digitization of the analog video is necessary in order for the video to be transmitted using an Internet protocol. However, a detrimental side effect of the digitization process is the introduction of quantization errors and noise into the video.
- Therefore, the next step in the present invention is to eliminate the A/D conversion noise via histogram analysis. This noise elimination is done by first dividing a frame of video into logical two-dimensional blocks of pixels. Many different sizes of blocks may be used, for example 8×8 pixels, 32×32 pixels, 64×32 pixels, etc. Different block sizes may be used depending on the size of the entire image, the bandwidth of the connection, etc. After the image is divided into blocks, the noise reduction algorithm is completed on each block separately.
- For each block, a histogram of pixel values is created and sorted by frequency so that it is possible to identify how often each pixel value occurs. Less frequent pixel values are compared to more frequently occurring pixel values. If the less frequently occurring pixels are close in pixel value to the more frequently occurring pixel values, color values are mapped to the closest high frequency pixel value. To determine how close pixel values are, a distance metric is used based on the red, green, and blue (“RGB”) components of each pixel. In alternative embodiments, a similar distance metric can be used, based on the appropriate components of the pixel for that embodiment.
- The purpose of the noise reduction algorithm is to increase the redundancy in an image by eliminating the superfluous noise introduced by the A/D converter. For example, suppose an 8×8 pixel block size is used and the algorithm is operating on this particular block. Further, assume that of the 64 pixels in the current block, 59 are blue, 4 are red, and 1 is a light blue. In this example, a low frequency threshold is defined as any pixel values that occur less than 5 times and a high frequency threshold is defined as any pixel value that occurs more than 25 times within a block. In general, pixel values between these thresholds are ignored for the noise reduction analysis. Therefore, the algorithm determines that the 4 red pixels and the 1 light blue pixel occur rarely, and therefore might be noisy.
- In the next step, the 4 red pixels and the 1 light-blue pixel are compared with the more frequent pixel values (i.e. in this case the blue value). In this step, a pre-determined distance-threshold is used. If the distance between the less frequent pixel and the more frequent pixel is within this distance-threshold, then the less frequent pixel value is converted to the more frequent pixel value.
- In this example, it is likely that the light-blue pixel is close enough in value to the blue pixel. Thus, the light-blue pixel is mapped to the blue pixel. Though the red pixels occur rarely, the distance in value between the red pixel value and the blue pixel value is large enough so that the red pixels are not converted.
- Further disclosed is an efficient method which integrates the aforementioned method of pixel conversion with a color conversion via a color “look-up table.” By integrating the pixel conversion methods and the look-up table, both noise elimination and efficient compression can be accomplished simultaneously.
- It is commonly known in the art that one method of compressing color video is to use fewer bits to represent each pixel. For example, a common video standard uses 8 bits to represent the red component of video, 8 bits to represent the green component of video, and 8 bits to represent the blue component of video. This representation is commonly referred to as an “RGB” representation. If only the four most significant bits from the red, green, and blue components of the video are used instead of all 8-bits, the total amount of data used to represent the video is reduced by 50 percent.
- The present invention uses a more intelligent method of converting an RGB representation of pixels into a compact representation. The method and apparatus of the present invention uses a color look-up table that maps a specific RGB value to a more compact form. Both the compression device and the decompression device use the same look-up table. Further, different look-up tables can be used depending on bandwidth availability, the capabilities of the local display device, etc.
- In the present invention, the color look-up table is used to implement the noise reduction color conversion. In the histogram analysis, a map of RGB values to color code values is created. If a less frequently occurring pixel value needs to be adjusted to a similar more frequent color, this is accomplished through the use of the color lookup table. The less frequently occurring color is mapped to the same color code as the highly frequent occurring color. Thus, the noise is efficiently removed from each block, while at the same time, the number of bits used to represent each pixel is reduced.
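- One simple way such a shared look-up table could be realized is sketched below: each 24-bit RGB value is reduced to a 12-bit color code by keeping the four most significant bits of each component, and the noise reduction step folds a rarely occurring color into a popular one simply by rewriting its table entry. The packing and the function names are assumptions made for the illustration; the patent does not prescribe a particular table layout.

```c
#include <stdint.h>
#include <stdio.h>

/* Map a 24-bit RGB value to a 12-bit color code by keeping the four most
 * significant bits of each component.  Compressor and decompressor must
 * agree on this mapping (or on whatever table replaces it).                */
static uint16_t rgb_to_code(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 4) << 8) | ((g >> 4) << 4) | (b >> 4));
}

/* Recover a displayable RGB value from a 12-bit code (decompressor side). */
static void code_to_rgb(uint16_t code, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)(((code >> 8) & 0xF) << 4);
    *g = (uint8_t)(((code >> 4) & 0xF) << 4);
    *b = (uint8_t)((code & 0xF) << 4);
}

int main(void)
{
    /* Noise reduction via the table: a rarely occurring light blue is given
     * the same color code as the frequently occurring blue it resembles.    */
    uint16_t table[4096];
    for (int i = 0; i < 4096; i++) table[i] = (uint16_t)i;   /* identity map  */

    uint16_t blue       = rgb_to_code(0, 0, 200);
    uint16_t light_blue = rgb_to_code(20, 20, 230);
    table[light_blue] = blue;               /* fold the rare color into blue */

    uint8_t r, g, b;
    code_to_rgb(table[light_blue], &r, &g, &b);
    printf("light blue now decodes as (%d, %d, %d)\n", r, g, b);
    return 0;
}
```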
- In addition to the methods of noise reduction, improved methods of interframe block comparison are disclosed. Specifically, temporal redundancy is identified and reduced, thereby limiting the bandwidth necessary for transmission of the remote computer video output. There are two methods disclosed for completing the interframe compression. In both methods, each frame or image is delineated into a block of pixels, and compared with the corresponding block of pixels from previously transmitted images. Different embodiments can use one or both of these methods depending on the level of compression desired.
- General technologies in the art employ compression systems that are highly susceptible to error and noise. For example, in certain known systems, the current frame of video is compared with the previously transmitted frame of video. Only portions of the image that have changed from the last frame to the current frame are transmitted. Methods to accomplish this are known in the art in which pixels between frames are simply compared for equality. Areas that are no longer the same are retransmitted. Generally, these compression systems are highly susceptible to noise during the analog to digital conversion and create inefficient retransmission of video. For example, if prior art methods were used for retransmitting the image, then often large portions of the image would be resent unnecessarily due to small errors in the image resulting from noise created during the A/D conversion.
- To overcome this pitfall, the present invention uses a unique two-level thresholding method to determine if areas of the frame have changed. The present invention uses two frame buffers as input. The first is the newly captured frame buffer. The second is the compare frame buffer. The compare frame buffer contains the image data from previously captured frame buffers.
- The algorithm divides each of these frame buffers into blocks of pixels. Any block size may be used, including 8×8, 32×32, 64×32, etc., as well as other irregular block sizes. Different block sizes may be used depending on bandwidth requirements, image size, desired compression yields, etc.
- The algorithm processes one block of pixels at a time. For each pixel, the algorithm computes the difference between the color components of the current frame buffer pixel and the compare frame buffer pixel. From this, a distance value is computed. This process is done for each pixel in the block.
- Each of these distance values is compared with a “pixel threshold.” If the distance value exceeds the pixel threshold, the amount it exceeds the threshold by is added to a distance sum. This running sum is calculated based on various equations for all pixels in the block.
- The distance sum is then compared with a “cell threshold.” If the distance sum exceeds the cell threshold, then the block of pixels is considered changed in comparison to the previous block. If the block of pixels has changed, the compare frame buffer will be updated with this new block. Further, this new block will be sent in a compressed format to the local user.
- If the distance sum is not greater than the cell threshold, the block is considered to be unchanged. Neither the compare frame buffer, nor the local user's screen is updated.
- This algorithm is ideal for locating areas of change in that it can detect a large change in a few pixels or a small change in a large number of pixels. The method proves more efficient and accurate as compared to an algorithm that simply counts the number of changed pixels in a cell. With such an algorithm, if a very few pixels within the cell changed drastically (for example, from black to white), the algorithm would still consider the cell to be unchanged since the overall summation would not exceed a low threshold. This will often lead to display errors in the transmission of computer video.
- Consider, for example, if a user were editing a document. If the user were to change a letter, such as an “E” to an “F,” only a few pixels would change in a video representation of that change. However, the result exhibited by these few pixels would be dramatic. A percentage threshold algorithm would not register this change leading to a display error. A percentage threshold algorithm, by only looking at the number of pixels within a block that have changed, generally fails at recognizing a case in which a few pixels change a lot. However, the present invention, by virtue of the two-level thresholding method and apparatus recognizes that the block of pixels has significantly changed between frames of video.
- The second temporal compression method relies on a cache of previously transmitted frames. An identical cache is synchronized between the remote device and the user's local computer. Like the previous temporal redundancy check, this second check is performed on a block of pixels within a frame. Again, any block size may be used, for example, 8×8, 16×16, 32×32 or 64×32.
- The cache check begins whenever a cell changes. The cache check compares the current block with corresponding blocks from previous frames. The cache can store an arbitrarily large number of previous frames. A higher percentage of cache hits is more likely to occur with larger cache sizes. However, the memory and hardware requirements increase with an increase in cache size. Further, the number of comparisons, and thus the processing power requirements, also increase with a larger cache size. A “cache hit” is defined as locating a matching block in the cache. A “cache miss” is defined as not finding the current block in the cache.
- Whenever a “cache hit” occurs, the new block does not have to be retransmitted. Instead, a message and a cache entry ID can be sent to the local computer. Generally, this message and entry ID will consume less bandwidth than retransmitting an entire block.
- If a “cache miss” occurs, the new block is retransmitted. Further, both the remote and local devices update the cache, by storing the block within the cache. Since the cache is of limited size, older data is overwritten. One skilled in the art would know there exists various algorithms that can be used in deciding which older data should be overwritten. For example, a simple algorithm can be employed to overwrite the oldest block within the cache. The oldest block can be defined as the least recently transmitted block.
- In order to search for a cache hit, the new block must be compared with all corresponding blocks located within the cache. There are several ways in which the new block can be compared with the previous blocks located within the cache. In the preferred embodiment, a cyclic redundancy check (“CRC”) is computed for the new block and all corresponding blocks. The CRC is similar to a hash code for the block. A hash code is a smaller, yet unique representation of a larger data source. Thus, if the CRCs are unique, the cache check process can compare CRCs for a match instead of comparing the whole block. If the CRC of the current block matches the CRC of any of the blocks in the cache a “cache hit” has been found. Because the CRC is a smaller representation of the block, less processing power is needed for comparing CRCs. Further, it is possible to construct a cache in which only the CRCs of blocks are stored on the remote side. Thus, using a CRC comparison saves memory and processor time.
- In alternative embodiments, a similar hash code or checksum can be used. Alternatively an algorithm similar to the one used in the first temporal redundancy check can be applied to the cache check. Generally, such an algorithm can be less susceptible to noise.
- Other disclosed methods of video compression generally transmit only pixel values that change. For example, a method and apparatus can retransmit a difference frame, whereby only the changes between the current frame and the previous frame are transmitted. Typically, these methods of transmitting difference frames can cause frequent synchronization errors. Further, by retransmitting on a block level, less addressing is potentially needed in determining where each block is located within an image than if the decision to retransmit is performed at pixel granularity.
- Once the image block-by-block comparison is performed, in the preferred embodiment, each block that must be transmitted is first compressed. In the preferred embodiment, the blocks are compressed using the Joint Bi-level Image Group (JBIG) lossless compression technique.
- JBIG is lossless and was designed for black and white images, such as those transmitted by facsimile machines. However, the present invention compresses and transmits color images. Therefore, in order to utilize the JBIG compression technique, the color image must be bit-sliced and the subsequent bit-planes must be compressed separately.
- A bit-slice of a color image is created by grabbing the same bit from each pixel across the whole image. The color look-up table uses a compact form in which the most significant bits of each pixel are stored first, and the lesser significant bits are stored last. Thus, the first bit planes will contain the most significant data and the last bit-planes will contain the least significant data.
- By combining the JBIG compression technique with the color-lookup-table, the method and apparatus compresses and transmits the most significant bits of the frame first. Thus, the local computer will receive video from the remote computer progressively, receiving and displaying the most significant bits of the image before receiving the remaining bits. Such a method is less sensitive to changes in bandwidth and will allow a user to see the frame of video as it is transmitted, rather than waiting for all details of the frame to be sent.
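- The bit-slicing and most-significant-plane-first ordering can be pictured with the C sketch below. It assumes 8-bit color codes and a 64×32 block, and the compress_and_transmit call is a hypothetical placeholder for the JBIG step; none of the names come from the disclosure.

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_W 64
#define BLOCK_H 32
#define PLANE_BYTES ((BLOCK_W * BLOCK_H) / 8)

/* Build one bit plane: the same bit from every pixel, packed 8 pixels per
 * byte, which is the bi-level form a JBIG coder expects. */
static void extract_bit_plane(const uint8_t *codes, uint8_t *plane, int bit)
{
    memset(plane, 0, PLANE_BYTES);
    for (int i = 0; i < BLOCK_W * BLOCK_H; i++)
        plane[i >> 3] |= (uint8_t)(((codes[i] >> bit) & 1) << (7 - (i & 7)));
}

/* Send the most significant planes first so the receiver can display a
 * coarse image immediately and refine it as later planes arrive. */
static void send_planes_msb_first(const uint8_t *codes)
{
    uint8_t plane[PLANE_BYTES];
    for (int bit = 7; bit >= 0; bit--) {
        extract_bit_plane(codes, plane, bit);
        /* compress_and_transmit(plane, PLANE_BYTES, bit);  -- hypothetical */
    }
}
```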
- In an alternate embodiment, the device is also capable of calibrating the analog to digital conversion automatically “on the fly” so that the whole range of digital values is used. For example, if the device is supposed to transmit values between 0 and 255 (i.e., general pixel depth values), but instead only transmits values between 10 and 245, it will dynamically adjust the gain of the A/D converter to take advantage of the full range of digital values. This adjustment can be done for the red, green and blue components on an individual basis or a cumulative basis. By adjusting this range, the user receives more accurate representations of the video.
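- A simplified sketch of this dynamic range adjustment for one color component follows. The statistics structure and the register-write step are illustrative assumptions; actual A/D converters expose their own gain and offset interfaces.

```c
#include <stdint.h>

/* Observed darkest and brightest values actually produced by the A/D
 * converter for one color component during recent frames. */
typedef struct { uint8_t observed_min, observed_max; } channel_stats_t;

/* Compute a gain and offset that stretch the observed range onto the full
 * 0-255 scale; e.g. values confined to 10..245 yield a gain of about 1.085. */
static void calibrate_channel(const channel_stats_t *s,
                              double *gain, double *offset)
{
    double span = (double)s->observed_max - (double)s->observed_min;
    if (span < 1.0)
        span = 1.0;                       /* guard against a flat image */
    *gain = 255.0 / span;
    *offset = -(*gain) * (double)s->observed_min;
    /* write_adc_gain_offset(gain, offset);  -- hypothetical hardware call */
}
```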
- Further disclosed is a decompression method and apparatus used to receive data from the compression device and convert the data so that it may be displayed on the user's local display device. The decompression device includes a device capable of bi-directional digital communications. Using this communication device, the decompression device is able to receive video data from the compression device and transmit keyboard and mouse data. In an alternate embodiment, the decompression device also includes a means to control a serial device by transmitting serial data. Thus, the decompression device enables a local user to control a remote computer using a local keyboard, video, mouse, and serial device.
- The decompression device reconstructs frames of video based on the messages received from the compression device. Thus, the decompression device contains a frame buffer with the most up-to-date video data. The data in the frame buffer is sent to a display device so that the user can view the data from the remote computer.
- The image in the frame buffer is constructed using a combination of data from a cache and transmitted data from the remote device. The remote device indicates what areas of the remote computer video yielded “cache hits” and what areas are retransmitted. The decompression device constructs the frame buffer based on these indications.
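- In outline, the receiving side can be pictured as a small dispatch step applied to each incoming message, as sketched below. The message structure, field names, and helper layout are hypothetical; the sketch only shows how cache-hit indications and retransmitted blocks both end up in the frame buffer while the local cache stays in step with the sender's.

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_BYTES (64 * 32)

enum msg_type { MSG_CACHE_HIT, MSG_NEW_BLOCK };   /* assumed message types */

typedef struct {
    enum msg_type type;
    int           block_index;       /* which block of the frame is affected    */
    int           cache_slot;        /* valid for MSG_CACHE_HIT                 */
    uint8_t       data[BLOCK_BYTES]; /* valid for MSG_NEW_BLOCK (decompressed)  */
} video_msg_t;

/* Apply one message to the local frame buffer and keep the local cache
 * synchronized with the sender's cache. */
static void apply_message(const video_msg_t *m,
                          uint8_t frame[][BLOCK_BYTES],
                          uint8_t cache[][BLOCK_BYTES], int *oldest, int depth)
{
    if (m->type == MSG_CACHE_HIT) {
        /* The block is already held locally: copy it out of the cache. */
        memcpy(frame[m->block_index], cache[m->cache_slot], BLOCK_BYTES);
    } else {
        /* New data: update the display and store it in the cache slot the
         * sender will also have overwritten (the oldest one). */
        memcpy(frame[m->block_index], m->data, BLOCK_BYTES);
        memcpy(cache[*oldest], m->data, BLOCK_BYTES);
        *oldest = (*oldest + 1) % depth;
    }
}
```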
- In addition, further disclosed is a cache that remains synchronized with the cache on the compression device. Thus, whenever the decompression method receives new video data, the cache is updated. Both the compression device and the decompression device use the same method for updating the cache by overwriting older data.
- The compression device sends video data that has been compressed using a lossless compression algorithm such as JBIG. Therefore, further disclosed is a method and apparatus which reverses this lossless compression. This decompression method and apparatus recognizes the changed areas of the image based on flags transmitted by the compression device. From this information, the decompression technique reconstructs the full frame of video.
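- Reassembling the color codes from separately decompressed bit planes can be sketched as follows; the packing (8 pixels per byte, most significant plane first) mirrors the slicing sketch above and is an assumption rather than a detail fixed by the disclosure.

```c
#include <stdint.h>

/* Merge one decompressed bit plane into the partially rebuilt frame of 8-bit
 * color codes. The frame must be zeroed before the first plane is merged;
 * each subsequent plane is shifted to its bit position and ORed in. */
static void or_merge_plane(uint8_t *frame, const uint8_t *plane,
                           int n_pixels, int bit)
{
    for (int i = 0; i < n_pixels; i++) {
        int plane_bit = (plane[i >> 3] >> (7 - (i & 7))) & 1;
        frame[i] |= (uint8_t)(plane_bit << bit);
    }
}
```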
- In addition, the frame of video is converted to a format that may be displayed on the local video monitor by reversing the color-table conversion. The decompression method is able to send the raw frame of video to the operating system, memory, or other location such that it may be received and displayed by the monitor.
- Therefore, the decompression device, like the compression device stores a local copy of the color-code table. The device can then convert the data from the remote computer into a standard RGB format for display on the local monitor.
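- Reversing the color-code conversion on the local side amounts to one table lookup per pixel, as in the short sketch below; the palette layout and names are illustrative assumptions rather than the actual color-code table of the disclosure.

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } rgb_t;

/* Convert received 8-bit color codes back to RGB for display, using the
 * same table that the compression side used to encode them. */
static void codes_to_rgb(const uint8_t *codes, rgb_t *out, int n_pixels,
                         const rgb_t palette[256])
{
    for (int i = 0; i < n_pixels; i++)
        out[i] = palette[codes[i]];   /* one lookup per pixel */
}
```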
- The decompression method can be implemented in a variety of ways. For example, in one embodiment, it is implemented as a software application that can be run in, for example, the Windows OS on an Intel Pentium powered PC. In an alternate embodiment, the decompression technique can be implemented such that it may run within a web browser such as Internet Explorer or Netscape® Navigator®. Such an embodiment would be more user friendly, therefore reducing the need for the installation of additional software on the local computer. Finally, in yet another embodiment, the decompression can be implemented in a device composed of a microprocessor and memory. Such an embodiment would further limit the necessary software stored on the local machine.
- Since the present invention is used for controlling a remote computer from great distances, there is a need to ensure that the transmission of the video signals is secure. If not, there exists the potential that hackers or competitors could view or control a user's computer. Therefore, the present invention was designed to easily integrate with digital encryption techniques known in the art. In one embodiment of the invention, a 128-bit encryption technique is used both to verify the identity of the user and to encrypt and decrypt the video stream transmission. A 128-bit public key RSA encryption technique is used to verify the user, and 128-bit RC4 private key encryption is used for the video streams.
- In the preferred embodiment, this video compression apparatus and method is used to allow a local computer access to a remote computer. However, the compression and device is not limited to such an embodiment, and can be applied to future needs for the transmission of similar types of video in near real-time over low bandwidths.
- The objects described, and further objects will become readily apparent to one skilled in the art upon review of the following description, figures and claims.
- A further understanding of the present invention can be obtained by reference to the preferred embodiment as well as some alternate embodiments set forth in the illustrations of the accompanying drawings. Although the illustrated embodiments are merely exemplary of systems for carrying out the present invention, the organization, expanded configurations and method of operation of the invention, in general, together with further objectives and advantages thereof, may be more easily understood by reference to the drawings and the following description. The drawings are not intended to limit the scope of the invention, which is set forth with particularity in the claims as appended or as subsequently amended, but merely to clarify and exemplify the invention.
- For a more complete understanding of the present invention, reference is now made to the following drawings in which:
-
FIG. 1A illustrates an overview of the preferred embodiment of the present invention in which the video compression method and apparatus are utilized between a local computer and a remote computer controlled by the local computer, so long as both are connected via an agreed upon protocol. -
FIG. 1B illustrates an alternate embodiment, in which the compression device is combined with a KVM switch, such that a local user can control one of many remote computers. -
FIG. 2 depicts a block diagram of the preferred embodiment of the compression device, including the hardware used to interface with the remote computer and the communications device, and the hardware used for digitizing and compressing the video signals of the present invention. -
FIG. 3A depicts a block diagram of one embodiment of the decompression device, whereby all decompression is done in software on a local computer. -
FIG. 3B depicts a block diagram of an alternate embodiment of the decompression device, in which the decompression apparatus is a separate hardware device. -
FIG. 4 illustrates a flowchart depicting an overview of the video compression algorithm. -
FIG. 5A depicts a more detailed flowchart of the compression algorithm showing the noise filtering and color-code table translation. -
FIG. 5B depicts a detailed portion of the flowchart of the compression algorithm, including how the cache testing and JBIG compression fit within the overall algorithm. -
FIG. 6 depicts a flowchart of the nearest color match function integrated with the color code table. -
FIG. 7 depicts a flowchart of the Noise Filter & Difference Test. -
FIG. 8 depicts an overview flowchart of the decompression method including integration with an application on the local computer. -
FIG. 9 depicts a more detailed flowchart of the decompression algorithm. -
FIG. 10 illustrates an example of an alternate configuration of the present invention in which multiple inputs of four local computers in conjunction with KVM switches are utilized to control remote servers. -
FIG. 11 illustrates an alternate configuration of the present invention in which 8 local computers control 256 servers. -
FIG. 12 illustrates an alternate configuration wherein 8 local computers control 1024 remote servers. -
FIG. 13 illustrates an example of an alternate embodiment of the present invention wherein 16 local computers control 256 remote servers. - As required, a detailed illustrative embodiment of the present invention is disclosed herein. However, systems and operating structures in accordance with the present invention may be embodied in a wide variety of forms and modes, some of which may be quite different from those in the disclosed embodiment. Consequently, the specific structural and functional details disclosed herein are merely representative, yet in that regard, they are deemed to afford the best embodiment for purposes of disclosure and to provide a basis for the claims herein, which define the scope of the present invention. The following presents a detailed description of a preferred embodiment (as well as some alternative embodiments) of the present invention.
- Referring first to
FIG. 1A, represented is a block diagram of the preferred embodiment of the present invention including a computer system for accessing and controlling a remotely located computer system, the preferred embodiment in which the present invention would be used. The term “local” is used from the perspective of the user who wishes to access a computer at a remote location. The term “remote” refers to equipment at a different location from the user that is accessible via the present invention. Therefore, the phrase “remote computer” refers to a computer with a direct connection to the apparatus of the present invention. For example, in FIG. 1A, video out of remote computer 101 connects to compression device 103 via standard monitor connection 105. Similarly, the keyboard input/output is connected via standard keyboard connection 107 and the mouse input/output is connected to compression device 103 via standard mouse connection 109. - A user accesses
remote computer 101 vialocal computer 111.Local computer 111 is connected to monitor 113,keyboard 115, andmouse 117 viamonitor connection 119,keyboard connection 121, andmouse connection 123. In thepreferred embodiment monitor 113,keyboard 115, andmouse 117 are wired separately. Specifically, monitorconnection 119,keyboard connection 121, andmouse connection 123 consist of separate standard cables known in the art. However, any method of connectingmonitor 113,keyboard 115, and,mouse 117 tolocal computer 111 may be used with the present invention. For example, an alternative method is one in whichkeyboard 115 andmouse 117 connect tolocal computer 111 via a shared USB connection. In this embodiment,keyboard connection 121 andmouse connection 123 might be one physical cable. In another embodiment,keyboard 115 andmouse 117 can connect to local computer via a wireless connection. -
Compression device 103 includescommunication device 125, andlocal computer 111 includeslocal communication device 126, both of which are capable of bi-directional digital communication viacommunications path 127.Communication device 125 andlocal communication device 126 may include modems, network cards, wireless network cards, or any similar device capable of providing bi-directional digital communication. Similarly,communications path 127 may include a telephone, the Internet, a wireless connection, or any other similar device capable of providing bi-directional digital communication.Communication device 125 andlocal communication device 126 enablecompression device 103 andlocal computer 111 to communicate via any standard agreed upon protocol. Examples of these protocols include, but are not limited to, Transmission Control Protocol/Internet Protocol (TCP/IP), and User Datagram Protocol (UDP). -
Compression device 103 receives and analyzes the video signals fromremote computer 101 viastandard monitor connection 105.Compression device 103 analyzes and converts the video signal so that it may be packaged for transmission via a standard Internet protocol.Local computer 111 receives the transmissions fromcompression device 103 via the bi-directional communications provided bycommunication device 125,local communication device 126, andcommunications path 127 and translates the signal via a decompression technique corresponding to the compression techniques of the present invention. - In addition to receiving monitor signals from
compression device 103,local computer 111 receives signals fromkeyboard 115 andmouse 117 viakeyboard connection 121, andmouse connection 123. These signals are packaged on top of a standard Internet protocol, sent tolocal communication device 126 and transmitted tocommunication device 125 viacommunication path 127.Compression device 103 receives these signals fromcommunication device 125 and transmits them toremote computer 101 viastandard keyboard connection 107 andstandard mouse connection 109. By utilizing the aforementioned method of transmitting keyboard, mouse, and video signals, the present invention allows a user atlocal computer 111 to controlremote computer 101 as if the user were physically located atremote computer 101. -
FIG. 1B depicts an alternate embodiment of the present invention in whichcompression device 103 as depicted inFIG. 1A is combined withKVM switch 129. As shown inFIG. 1B ,local computer 111 is capable of controlling either of fourremote computers 101. In alternative embodiments,KVM switch 129 can control any series ofremote computers 101 in a similar manner. As can be seen,KVM switch 129 has fourstandard monitor connections 105, fourstandard keyboard connections 107, and fourstandard mouse ports 109. Using methods known in the art for controlling a switch such asKVM switch 129, the local user can switch control between each of the fourremote computers 101. -
FIG. 2 depicts a hardware diagram ofcompression device 103 of the preferred embodiment of the present invention.FIG. 2 is one embodiment in which the compression and digitization of the present invention may be implemented. One skilled in the art can readily recognize that there exist many other designs that could be used to implement the compression algorithms of the present invention. The first step in compressing the video is the conversion of the video from analog to digital, completed by A/D converter 201. A/D converter 201 receives analogred signal 203, analoggreen signal 205, analogblue signal 207,horizontal synch signal 209, andvertical synch signal 211.Clock 213 drives A/D converter 201 using means commonly employed in the art. The outputs of A/D converter 201 are shown as R-out 215, G-out 217, and B-out 219. In the preferred embodiment, these outputs are used to represent the red component, green component and blue component of the digitized signal respectively. A/D converter 201 outputs pixels (e.g. one pixel at a time) and the results are stored inpixel pusher 221.Pixel pusher 221 communicates withmicroprocessor 223 viacommunication bus 225.Pixel pusher 221 can also communicate withframe buffer 227 andJBIG Compression device 229 usingcommunication bus 225. -
Communication bus 225 is connected to network interface card 231 and dual universal asynchronous receiver transmitter (DUART) 233. DUART 233 interfaces with keyboard port 235 and mouse port 237. Thus, A/D converter 201, keyboard port 235, and mouse port 237 allow compression device 103 to interface with remote computer 101. Further, network interface card 231 allows compression device 103 to interface with communication device 125. Compression device 103 receives analog video signals, outputs keyboard and mouse signals, and communicates with local computer 111 via communication device 125. Finally, by means of JBIG compression device 229, microprocessor 223, flash 239, and random access memory 241, compression device 103 pictured in FIG. 2 can be programmed and configured to implement the video processing methods of the present invention disclosed herein. -
FIG. 3A illustrates decompression software 301 interacting withlocal computer 111.Local computer 111 runsoperating system 303 capable of receiving data fromlocal communication device 126 via operating system data link 305. Operating system data link 305 utilizes shared memory, a memory bus, or other device drivers.Local communication device 126 receives data fromcompression device 103 overcommunications path 127. When a user decides to operateremote computer 101,operating system 303 loads the decompression software 301 like any other process, from a computerreadable medium 307 via computer readable medium to operating system data link 309. Decompression software 301 then accesses the data received fromlocal communication device 126. Decompression software 301 is used to decompress data received fromlocal communication device 126 and convert the data into data that can be interpreted byvideo card 311. The data is then transmitted tovideo card 311 via operating system 301 where it is then transferred tovideo card 311 via operating system data link 313. - Similarly, decompression software 301 receives signals from
keyboard 115 via operating system's 303 operating system tokeyboard connection 315 which connects tokeyboard port 317. In a similar manner, decompression software 301 receives signals frommouse 117, via operating system's 303 operating system tomouse connection 319 tomouse port 321. - Though having the decompression completed in software is the preferred embodiment, it would be apparent to one skilled in the art, that such decompression could also be completed by means of a hardware solution. For example,
FIG. 3B shows adecompression device 323 that can accomplish the same decompression as decompression software 301. In this case,decompression device 323 replaceslocal computer 111 and further includeslocal communication device 126.Monitor 113,keyboard 115, andmouse 117 attaches todecompression device 323 via themonitor connection 119,keyboard connection 121, andmouse connection 123 viamonitor port 325,keyboard port 327, andmouse port 329 respectively. In this embodiment, the data frommonitor port 325,keyboard port 327, andmouse port 329 communicates withmemory 331 andmicroprocessor 333 to run the decompression methods of the present invention. - The decompression method receives data from
local communication device 126 and transmits a decompressed version of the data to monitor 119. In this embodiment, there exists a connection betweenlocal communication device 126 andmemory 335 and a connection betweenvideo port 325,keyboard port 327, andmouse port 329 withmemory 331. These connections enabledecompression device 323 to send data fromkeyboard port 327 andmouse port 329 tolocal communication device 126.Local communication device 126 transmits the data over thecompression link 127. These connections also enabledecompression device 323 to receive data fromlocal communication device 126 and transmit the data tovideo port 325. One skilled in the art will readily appreciate that there are any number of ways to implement such a configuration utilizing a combination of hardware and/or software. -
FIG. 4 depicts the function of the compression and digitization apparatus of the present invention. The decompression method is implemented bycompression device 103, which connects withcommunication device 125 andstandard monitor connection 105 ofremote computer 101. The compression process begins atcapture image block 401 where data is captured fromstandard monitor connection 105.Capture image block 401 is implemented indecompression device 103, bypixel pusher 221. The video is converted from VGA analog video to a digital representation of the signal.Pixel pusher 221 enablescapture image block 402 to grab the raw data and passes it to the frame buffers.Frame store block 402 is a method implemented by device frame buffers 227.Frame store block 402 stores a whole frame of video inframe buffer 227. - The resulting digital representation of the image is divided into a plurality of pixel blocks. The compression process is performed on each pixel block until the entire image has been compressed. The block size may be arbitrarily large, however, in the preferred embodiment the image is divided into blocks which are
pixels 64 by 32 pixels. - In
filter block 403, each block of pixels is filtered and translated from a RGB representation to a color code representation. The process offilter block 403 is implemented incompression device 103 bymicroprocessor 223. The filtering is designed to reduce the number of different colors present in each block by converting less frequently occurring colors to more frequently occurring colors. Noise introduced by the A/D converter distorts the pixel values of some pixels. The filtering recognizes pixels that are slightly distorted and adjusts these pixels to the correct value. Such filtering creates an image with greater redundancy, thus yielding higher compression ratios. - The filtering completed in
filter block 403 operates on one block of pixels at a time. The size of the block can vary based on bandwidth requirements, the size of the image, etc. - The filtering is implemented as part of the color code conversion process. The color code table is a commonly used compression method of representing colors using fewer bits than if kept in RGB format. By using fewer bits, less information must be transmitted with each frame, allowing video to be transmitted at lower bandwidths. In the present invention, a variety of color code tables may be used depending on the desired number of unique colors in the image, bandwidth restrictions, etc.
- The color code table uses the results of the noise filter to convert less frequently occurring pixel colors to more frequently occurring colors. The less frequently occurring pixel values are given the same color code representation as the more frequently occurring pixel values. Thus, the noise reduction and color code conversion is accomplished at the same time.
-
Compression device 103 keeps a cache of recently transmitted images. Such a cache can be implemented and stored inram 241. After noise elimination and image conversion, the compression process compares the most recent block with the corresponding block of pixels in recently transmitted images. This check is executed by “cache hit”check 405. The methods of “cache hit” check 405 are implemented incompression device 103 bymicroprocessor 223. If the most recently transmitted block is the same as the block stored in the cache, there is no need to retransmit the image. Instead, as noted in cache hit message block 407, a “cache hit” message is sent to the local computer, indicating that the most recently transmitted block is already stored in the cache. Cache hit message block 407 is also implemented incompression device 103 bymicroprocessor 223. - The next step in the process, update check 409, checks to see if the current block of pixels is similar to the corresponding block in the image most recently transmitted. This can also be implemented before “cache hit”
check 405, or in parallel with “cache hit”check 405. The main purpose of update check 409 is to check if the block has changed since the last frame. If the block has not changed, there is no need to send an updated block to the local computer. Otherwise, the block is prepared for compression inbit plane block 411. In the preferred embodiment, this update check 409 uses a different technique than the cache check. With two ways of checking for redundancy, higher compression can result. Both the methods of update cache check 409 and the methods ofbit plane block 411 are implemented incompression device 103 bymicroprocessor 223. - For any areas of the image that have changed, the cache is updated, and the data is compressed before being sent to the TCP/IP stack. In the preferred embodiment, the image is compressed using the IBM JBIG compression algorithm. JBIG is designed to compress black and white images. However, the image to be compressed is in color. Therefore, bit planes of the image are extracted in
bit plane block 411 and each bit plane is compressed separately bycompression block 413. Finally, the compressed image is sent to the local computer.JBIG compression device 229 implements sendcompressed message block 415. Send compressed message block 415 sends the compressed video toserver stack block 417.Server stack block 417, implemented onNIC 231 enables the compressed video to be sent tolocal communication device 126 using an Internet protocol (in this case TCP/IP). -
FIG. 5A andFIG. 5B provide detailed flowcharts of a preferred embodiment of the compression process. As seen inFIG. 5A , the video capture is done at a rate of 20 Frames per second inVGA capture block 501. VGA capture block is implemented bypixel pusher 221 which receives the output of the A/D conversion process. Standard monitors often update at refresh rates as high as 70 times per second. As a rate of 20 frames per second is significantly less frequent, this step limits the amount of data that is captured from the computer. Thus, this first step reduces the bandwidth needed to transmit the video. In this embodiment, the data is outputted in RGB format where 5 bits are allocated to each color. This allows for the representation of 32,768 unique colors. However, other formats capable of storing more or less colors may be used depending on the needs of the users and the total available bandwidth. After receiving the digitized signal,VGA capture block 501 transmits the raw data to framebuffer 0 503 andframe buffer 1 505. - A frame buffer is an area of memory capable of storing one frame of video. Two frame buffers allow faster caching of image data. Raw frames of video are alternatively stored in
frame buffer 0 503, andframe buffer 1 505. This allows the next frame of video to be captured even as compression is being performed on the previous frame of video. Incompression device 103,frame buffers 227 are a device that are capable of implementingframe buffer 0 503 andframe buffer 1 505. - The frame buffer that contains the most recent image is used as data for nearest
color match function 509 as is the data in color code from client data block 511. Color code from client data block 511 is stored inflash 239. Nearestcolor match function 509 is a method that can be implemented as a device bymicroprocessor 223. A detailed explanation of nearestcolor match function 509 is shown inFIG. 6 . - The resulting color code table 513 from nearest
color match function 509 is used forcolor code translation 515. The process translates the RGB representation of each pixel into a more compact form via this color code table translation. Color code table 513 is generated bynearest color match 509 and can be stored inram 241.Color code translation 515 translates a block of RGB values to their color code values and stores the result in codedframe buffer 517.Coded frame buffer 517 can also be implemented as a device stored inram 241. - In parallel to the color code translation, a difference test is performed on each block of pixels stored in
frame buffer 0 503, andframe buffer 1 505, comparing each block to the corresponding block of the previous frame. The noise filter and difference test, shown asdifference test block 519, accomplishes this comparison using the current raw frame buffer, in this exampleraw frame buffer 0 503, and compareframe buffer 521 stores the pixel values of what is displayed on the user's screen.Difference test block 519 is fully illustrated inFIG. 7 . - Once
difference test block 519 is complete, the second temporal redundancy check is performed. This process used in performing the second temporal redundancy check begins in CRC compute block 523 by computing the cyclical redundancy check (CRC) for all blocks that have changed. - Cyclic redundancy check (CRC) is a method known in the art for producing a checksum or hash code of a particular block of data. The CRCs can be computed for two blocks of data and then compared. If the CRCs match, the blocks are the same. Thus, CRCs are commonly used to check for errors. Often, a CRC will be appended to a block of transmitted data so that the receiver can verify that the correct data is received. However, in the present invention, the CRC is used to compare a block of data with blocks of data stored in a cache. Thus, in
CRC compute block 523, the CRC is computed for each block of data that has changed. The array of CRCs is stored inCRC array buffer 525. - Turning next to
FIG. 5B , depicted is an overview of the second temporal redundancy check and the lossless compression of a full frame of video. Waitblock 527 waits for the frame buffer and the CRC array to be finished. Next, a decision is made as to whether a new video mode has been declared, as seen innew video check 529. If a new video mode is declared, all data is invalidated in invalidateblock 531 and the algorithm starts again. A new frame of video will be received, as seen inFIG. 5A and the second temporal check will return to wait block 527 until a full frame of video is received. Waitblock 527, newvideo mode check 529, and invalidateblock 531 are methods that can be implemented as devices bymicroprocessor 223. - A new video mode can be declared, if for example, a new local computer, with different bandwidth or color requirements connects to the remote computer. A new video mode can also be declared if the bandwidth requirements of the current local computer change.
- If in
new video check 529 it is deemed that a new video mode has not been declared, then the comparison of the current block's CRC with the cached CRCs is performed innew CRC block 533. This block usesCRC buffer array 525 andcell info array 535.Cell info array 535 stores the cached blocks and the CRCs of the cache blocks and can be implemented as a device inram 241.New CRC block 533 is a device that can be implemented inmicroprocessor 223. It also stores the current state of each block to indicate when the block was last updated. - Cache hit
check 537, implemented inmicroprocessor 223 computes whether a current block is located within the cache. If it is, the cell is marked as complete, or updated, in send cache hitblock 539. This process of checking and marking as updated is completed for all blocks in the image, and can be implemented inmicroprocessor 223. -
Compute update block 541 checks for incomplete cells, or cells that need to be updated. All cells that need to be updated are combined to for an update rectangle. The update rectangle is compressed and sent to the client. In the decompression stage, the client can use the update rectangle, along with cache hit messages to reconstruct the video to be displayed. If there is nothing to update (if the video has not changed between frames) then updatecheck 543 sends the algorithm back to waitblock 527. Thus the current frame will not be sent to the client. By eliminating the retransmission of a current frame of video, the algorithm saves on the necessary bandwidth necessary for transmitting the video. - If however, there are areas of the image that need to be updated, the update rectangle is first compressed. In the preferred embodiment, the method of compression is lossless. One example of a lossless black and white compression is the JBIG compression method disclosed by IBM. However, the compression method of the present invention is designed for color images. Thus, as seen in bit-
slice block 545, the image must be divided into bit slices. A bit slice of the image is constructed by taking the same bit from each pixel of an image. Thus, if the image uses 8-bit pixels, it can be deconstructed into 8 bit slices. The resulting bit slices are stored in bit-slice buffer 547. Again, computeupdate block 541, update check 543, and bit-slice block 545, are all methods that can be implemented as part ofcompression device 103 by usingmicroprocessor 223. - Each bit slice is sent separately to the compression portion of the algorithm shown as
compressor block 549. In this case, JBIG compression is performed on each block and sent toserver stack block 417 by compress and transmitblock 551. The JBIG compression method of compress and transmitblock 549 is implemented inJBIG compression device 229. Since JBIG is designed to operate on bi-level black and white images, the color video output of the monitor is sent to the compressor as separate bit slices. When the video is fully compressed it is sent to the client viaNIC 223. - Since the preferred embodiment captures
frames 20 times a second, it is necessary to wait 50 ms between frame captures. Thus time check 553 will wait until 50 ms have passed since the previous frame capture before returning the algorithm to wait block 527. - Referring now to
FIG. 6 , illustrated is the nearestcolor match function 509 that selectively converts less frequently occurring colors to more frequently occurring colors by mapping the less frequently occurring colors to the color-coded representation of the more frequently occurring colors. - Nearest
color match function 509 operates on one block of the images stored inraw frame buffer 0 503 andraw frame buffer 1 505 at a time. As seen inFIG. 6 , grabblock 600 is used to extract a block of pixels from the image stored inraw frame buffer 0 503 andraw frame buffer 1 505. In this case,raw frame buffer 0 503 is used to extract one block of pixels ingrab block 600. In the preferred embodiment of the present invention, the extracted block is 64 by 32 pixels. However, the method can function on blocks of any size. - The goal of nearest
color match function 509 is to eliminate noise in a block of pixels introduced by the A/D conversion. This is accomplished by converting less frequently occurring pixel values to similar more frequently occurring pixel values. This is done primarily through histogram analysis and difference calculations. - Nearest
color match function 509 generates a histogram of pixel values which are stored inhistogram generation block 601. The histogram measures the frequency of each pixel value in the block of pixels extracted by grabbingblock 600. The histogram is sorted, such that a list of frequently occurring colors,popular color list 603, and a list of least frequently occurring colors,rare color list 605, are generated. The threshold for each list is adjustable. - The compression analyzes each low frequently occurring pixel to determine if the pixel should be mapped to a value that occurs often. First, grab next
rare color block 607 picks a pixel value fromrare color list 605 and compares it to a high frequency color pixel extracted by grab nextpopular color block 609. The distance between the low frequency pixel value and the high frequency pixel value is computed incompute distance block 611. In this process, distance is a metric computed by comparing the separate red, green and blue values of the two pixels. The distance metric, “D,” can be computed in a variety of ways. One such example of a distance metric is as follows: -
D=(R2−R1)̂2+(G2−G1)̂2+(B2−B1)̂2 - In this formula, R1 is the red value of the low frequency pixel, R2 is the red value of the high frequency pixel, G1 is the green value of the low frequency pixel, G2 is the green value of the high frequency pixel, B1 is the blue value of the low frequency pixel, and B2 is the blue value of the high frequency pixel.
- This formula yields a distance metric, D, which is how different the color values are between a low frequently occurring pixel, and a high frequently occurring pixel. The goal of the algorithm is to find the high frequently occurring pixel that yields the lowest D for the current low frequently occurring pixel. Therefore, a compare is done in
closest distance check 613, for each D that is computed. Every time a D is computed that is lower than any other previous D, an update is completed by updateclosest distance block 615. - Once all high frequently occurring pixels are compared as determined by done
check 617, a computation inthreshold check 619 is performed to see if the lowest occurring D is within a predefined threshold. If this D is within the threshold, color code table 513 is updated by updatecolor map block 621 mapping the low frequently occurring pixel to the color code value of the high frequently occurring pixel that yielded this D value. This process is repeated for all low frequency pixels and color code table 513 is updated accordingly. - Next referring to
FIG. 7 , illustrated is the first temporal redundancy process used indifference test block 519. This process operates on every block in the image.Current pixel block 700 contains one block of pixels from the raw frame buffer.Previous pixel block 701 contains the corresponding block of pixels from compareframe buffer 521. The process begins by extracting corresponding pixel values for one pixel from thecurrent pixel block 700 andprevious pixel block 701. The pixels are stored in getnext pixel block 703. The pixel values are then compared using a distance metric. In the preferred embodiment, the distance metric is computed in distancemetric block 705 using the following formula: -
D=(R1−R2)̂2+(G1−G2)̂2+(B1−B2)̂2 - As before, R1, G1, and B1 are the red, green and blue values respectively of the frame buffer pixel. Similarly, R2, G2, and B2 are the red, green and blue values respectively for the compare frame buffer pixel.
- Next, the distance metric, D, is compared with a noise tolerance threshold in
noise threshold check 707. If D is greater than the noise threshold, it is added to a running sum stored inaccumulation block 709. If the two pixels differ by less than this threshold, the difference is considered to be noise, or insignificant, and thus it is not part of the accumulation. This process enables efficient filtering of noise using a block-by-block comparison. - This process of computing distances and summing values greater than a predefined threshold to a running total continues until the last pixel of the block is reached as determined by
last pixel check 711. Once the last pixel is reached, the running total is compared with a second threshold, the block threshold, incell threshold check 713. If the running total is greater than block threshold, the current block fromraw frame buffer 0 503 is considered different than the one in compareframe buffer 521. Otherwise, the two are considered close enough to be considered the same. - If the running total exceeds the threshold, a procedure is run as shown in
new pixel block 715. A flag is set indicating that the particular block has changed so that it will be transmitted tolocal computer 111. Further, as seen inFIG. 7 , compareframe buffer 521 is updated with the block of pixels to be transmitted. - If the running total does not exceed the threshold, the block is considered to be unchanged from the previous block, and in no pixel change block 721 a flag is set to indicate that this block does not have to be transmitted to the
local computer 111. At this point, the second check for temporal redundancy can be performed on the blocks that have changed since the previous transmission. -
FIG. 7B is used to illustrate the two level thresholding operation on a sample block. For purposes of this disclosure, 8×8 pixel block sizes are used. Each pixel is given a value between 0 and 255 as is common in the art. 0 represents ablack pixel 255 represents a white pixel, and intermediate values represent shades of gray. Second frame comparebuffer 751 is a block of pixels from the previously transmitted frame. Since second frame comparebuffer 751 has pixels withvalue 0, second frame comparebuffer 751 represents an area that is all black.Previous pixel 752 is the upper leftmost pixel of second frame comparebuffer 751. - To simplify, suppose that a small white object, such as a white cursor, enters the area of the screen represented by second frame compare
buffer 751. This is represented infirst frame buffer 753. In first frame buffer 753 a majority of the pixels are black, however the upper left pixel is white.First frame buffer 753 represents the same spatial area of the video as second frame comparebuffer 751, just one frame later. Herecurrent pixel 754 is the same pixel asprevious pixel 752 again, just one frame later. Infirst frame buffer 753 the white cursor is represented bycurrent pixel 754. As a result,current pixel 754 has a pixel value of 255. - Further suppose that noise has been introduced by the A/D converter, such that previous
black pixel 755 is now currentgray pixel 756. Thus, while previousblack pixel 755 has a value of zero, currentgray pixel 756 has a value of two. - Further suppose that in this example the “pixel threshold” is 10, and the “cell threshold” is 200. The two-level thresholding algorithm is performed between
first frame buffer 753, and second frame comparebuffer 751. In computing the running sum of differences, the difference betweenprevious pixel 752, andcurrent pixel 754 is added to the running sum because the difference (255) exceeds the “cell threshold.” However, the difference between previousblack pixel 755 and currentgray pixel 756 is not added to the sum because that difference (2) does not exceed the cell threshold. - The running total will therefore equal 255. Since this total is greater than the cell threshold of 200, the block is considered to have changed. This example illustrates the advantages of the two-level threshold. The noise that entered into
current frame 753 was ignored, but at the same time, the real change was recognized. -
FIG. 8 illustrates the overall decompression method. The process begins by waiting for a message in wait formessage block 801. The message is received fromlocal communication device 126 and stored in an area readable by the decompression method. In this embodiment, messages are transmitted using the TCP/IP protocol. When a message is received from the compression device it will be stored locally in TCP/IP stack 803. Wait for message block 801 imports this message from TCP/IP Stack 803. Other embodiments may use a protocol other than TCP/IP, however the functionality of the present invention does not change. - The message received by wait for message block 801 contains either compressed video data or a flag indicating that the updated frame of video is stored in cache. In cache hit
decision block 805 analysis of the message is performed to determine if the updated video is stored in the cache. If the updated video is in the cache, the image can be reconstructed from data already stored locally. This reconstruction occurs in cache copy block 807 where data is transferred from the cache to a frame buffer holding data representing the most up-to-date video. - If the transmitted message indicates that the updated video is not in the cache, then decompression of the transmitted video occurs in
decompress block 809. As described in the compression figures, the preferred embodiment uses JBIG as the lossless compression technique. Therefore, the decompression of the video frame must occur on one bit plane of data at a time. After each bit plane is decompressed it is merged with the rest of the bit planes stored in the frame buffer. This merging occurs inmerge block 811. Once the full frame buffer is constructed the display on the local computer is updated as seen inupdate display block 813. - In an alternate embodiment, the display on the local computer can be updated after each bit plane is received. A user does not have to wait on receiving the whole frame of video before it displays on the screen. This method is useful if the bandwidth available for video transmission varies. This progressive transmission is one advantage of using JBIG over other compression methods.
-
FIG. 9 further illustrates the decompression method disclosed inFIG. 8 . The method begins with wait formessage block 801. It then makes a series of three decisions. The first seen in new video mode message check 901, determines whether the message is a new video mode message. A new video mode message can be sent for a variety of reasons, including a bandwidth change, a change in screen resolution, or color depth, or a new client. This list is not meant to limit the reasons for sending a new video mode message, but instead to give examples of why it may occur. If a new video mode message has been transmitted, the decompression device notifiesapplication 903.Application 903 is the program running on the local computer that executes the operations of the decompression device.Application 903 interfaces with the input/output oflocal computer 111. Any updates in data must therefore be sent toapplication 903. Onceapplication 903 is notified, the decompression device entersfree buffer block 907.Free buffer block 907 frees all buffers including any memory devoted to storing previously transmitted frames. The decompression method then restarts to wait for message block 801, waiting for a message fromcompression device 103. - If a new video mode message was not sent, the message is checked to see if it indicates the current frame of video is stored in cache. This check is seen in cache hit
decision block 805. If the decompression method determines that the message does indicate a cache hit, it will update mergeframe buffer 909 with data fromcache frame buffer 913, as seen in notifyapplication layer block 915. Mergeframe buffer 909 contains the most up-to-date data indicating what should be displayed on the local monitor.Cache frame buffer 913, stores the same recently transmitted frames in cache that are stored on the compression device. Thus, if a “cache hit” message is received by the decompression device, the video data needed to complete the update ofmerge frame buffer 909, with data fromcache frame buffer 913.Copy block 914 receivescache frame buffer 913 data as input and outputs this data to mergeframe buffer 909. - After the updating of
merge frame buffer 909, notifyapplication layer block 915 notifiesapplication 903 of new data. Inapplication copy block 919application 903 receives data frommerge frame buffer 909 and translates the data into a pixel format that can be displayed on the screen.Application copy block 919 completes this translation and sends the data in current screen pixel format to anupdate frame buffer 921 which is an area of memory that can be read bydisplay 923.Display 923 may include a video card, memory, and any additional hardware and software commonly used for video monitors. - If the message sent from the compression device does not contain a cache hit as determined by cache hit
decision block 805, then the decompression method confirms that the message contains compressed data in compressed datamessage decision block 925. If there is no compressed data the algorithm restarts at wait formessage block 801. Otherwise, the data is decompressed into bit slice buffers in decompress data block 927. If the JBIG compression algorithm is used, the data has been divided into bit slices when compressed. Therefore, the first step in the decompression of said data is to divide it into those bit slices and decompress each bit slice. As each bit slice is decompressed, it is stored in bitslice frame buffer 929 and then combined with the previous bit slices via an “OR” type operation completed in “OR”block 931. - Next, end of
field decision block 933 calculates whether all of the data from one field of the current frame has been received. If a full field has been received, then the decompression method notifiesapplication 903 in notifyapplication layer block 915. Again, like with a cache hit, the notification allows the application to read frommerge frame buffer 909. The data frommerge frame buffer 909 is converted into current screen pixel format inapplication copy block 919 and transmitted to theupdate frame buffer 921. The data inupdate frame buffer 921 is used indisplay 923. If end offield decision block 933 determines that the full field has not arrived, the method returns to wait for message block 801 to wait for the rest of the message. - Once the full field of video has been sent to the application level, a second check in
decision block 935 is performed to see if the field is the last field included in the message. If it is, the cache is updated byupdate cache block 941. Otherwise, the method continues to wait on more data from the compression device in wait formessage block 801. In update cache block 937 new data overwrites older data in the cache. This keeps the cache up-to-date and synchronized with the compression device cache. - After the completion of the cache update, the system returns to the wait for message from
server block 801. This process continues so long as the compression device sends frames of video. - Next turn to
FIG. 10 , illustrated is an alternative embodiment in which the outputs of 4-input 4-output compression switch 1001 are connected to 42 portParagon KVM switch 1003 via fourcompression user stations 1005. 4-input 4-output compression switch 1001 utilizes the compression methods of the present invention within a 4-input 4-output KVM switch. 42-portParagon KVM switch 1003 is a KVM switch with 4 inputs and 42 outputs. In this configuration there can be up to fourlocal computers 111. Eachcompression user station 1005 receives one output of 4-input 4-output compression switch 1001, and sends the output to the input of 42-portParagon KVM Switch 1003. Twenty eight outputs from 42-portParagon KVM Switch 1003 are connected to 28 Sun User Consoles 1007. The remaining outputs of 42-portParagon KVM Switch 1003 are connected to 20 PC User Consoles 1009. Each Sun User Consol 1007 is connected to aremote sun workstation 1011, while each PC User Console 1009 is connected to aremote PC Server 1013. Thus in this configuration, a compression device, in this case, 4-input 4-output compression switch 1001, can control 108 total servers of which 28 areremote sun workstations 1011, and the other 80 areremote PC Servers 1013. -
FIG. 11 illustrates an alternate configuration of the present invention in which 8 local computers control 256 servers. In this embodiment, three 32-channel KVM switches 1017 are used in a two-level configuration. The first level 32-channel KVM switch 1017 is used as the input to the other two 32-channel KVM switches 1017. As in other arrangements, eachremote server 1015 has a user console 1019 that accepts input from 32-channel KVM switch 1017 and converts the input into a readable form for eachremote server 1015. As in alternate embodiments, the output from each 4-input, 4-output compression switch 1001 is sent tocompression user stations 1005 to convert this output into a form readable by 32-channel KVM switch 1017. -
FIG. 12 illustrates an alternate configuration wherein 8 local computers control 1024 remote servers. In this configuration there are two 4-input 4-output compression switches 1001 used in conjunction with three levels of 32-channel KVM switches 1017. In sum, there are 42 32-channel KVM switches 1017. As with other configurations, eachremote server 1015 has a user console 1019 capable of accepting input from 32-channel KVM switch 1017, and outputs toremote server 1015. Further, the output from each 4-input 4-output switch 1001 is sent tocompression user stations 1005. -
FIG. 13 illustrates an example of an alternate embodiment of the present invention wherein 16 local computers control 256 remote servers. This configuration shows how, with a combination of the present invention and KVM switches, remote computers can be controlled locally, or at the remote location itself. InFIG. 13 , there is a 16-input 16-output KVM switch 1021, with inputs connected to a combination oflocal computers 111, and remotecontrolling computer 1023. As in other configurations, thelocal computers 111 connect to theremote servers 1015, via 4-input 4-output compression switch 1001, andcompression user station 1005. The outputs of the 16-input 16-output KVM switch are sent to a combination ofremote servers 1015, andremote servers 1015 connected to additional 16-input 16-output KVM switches 1021. In total, there are 268remote servers 1015 that can be controlled by thelocal computers 111, and the remotecontrolling computer 1023. - While the present invention has been described with reference to one or more preferred embodiments, which embodiments have been set forth in considerable detail for the purposes of making a complete disclosure of the invention, such embodiments are merely exemplary and are not intended to be limiting or represent an exhaustive enumeration of all aspects of the invention. The scope of the invention, therefore, shall be defined solely by the following claims. Further, it will be apparent to those of skill in the art that numerous changes may be made in such details without departing from the spirit and the principles of the invention.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/728,998 US20100225658A1 (en) | 2002-08-29 | 2010-03-22 | Method And Apparatus For Digitizing And Compressing Remote Video Signals |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/233,299 US7684483B2 (en) | 2002-08-29 | 2002-08-29 | Method and apparatus for digitizing and compressing remote video signals |
US12/728,998 US20100225658A1 (en) | 2002-08-29 | 2010-03-22 | Method And Apparatus For Digitizing And Compressing Remote Video Signals |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/233,299 Continuation US7684483B2 (en) | 2002-08-29 | 2002-08-29 | Method and apparatus for digitizing and compressing remote video signals |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100225658A1 true US20100225658A1 (en) | 2010-09-09 |
Family
ID=31977208
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/233,299 Active 2024-12-22 US7684483B2 (en) | 2002-08-29 | 2002-08-29 | Method and apparatus for digitizing and compressing remote video signals |
US12/728,998 Abandoned US20100225658A1 (en) | 2002-08-29 | 2010-03-22 | Method And Apparatus For Digitizing And Compressing Remote Video Signals |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/233,299 Active 2024-12-22 US7684483B2 (en) | 2002-08-29 | 2002-08-29 | Method and apparatus for digitizing and compressing remote video signals |
Country Status (1)
Country | Link |
---|---|
US (2) | US7684483B2 (en) |
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050204026A1 (en) * | 2004-03-12 | 2005-09-15 | David Hoerl | Switchless KVM network with wireless technology |
US20090238284A1 (en) * | 2008-03-18 | 2009-09-24 | Auratechnic, Inc. | Reducing Differentials In Visual Media |
US20130136173A1 (en) * | 2011-11-15 | 2013-05-30 | Panasonic Corporation | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
US9036662B1 (en) * | 2005-09-29 | 2015-05-19 | Silver Peak Systems, Inc. | Compressing packet data |
US9092342B2 (en) | 2007-07-05 | 2015-07-28 | Silver Peak Systems, Inc. | Pre-fetching data into a memory |
US9130991B2 (en) | 2011-10-14 | 2015-09-08 | Silver Peak Systems, Inc. | Processing data packets in performance enhancing proxy (PEP) environment |
US9143455B1 (en) | 2008-07-03 | 2015-09-22 | Silver Peak Systems, Inc. | Quality of service using multiple flows |
US9152574B2 (en) | 2007-07-05 | 2015-10-06 | Silver Peak Systems, Inc. | Identification of non-sequential data stored in memory |
US9191342B2 (en) | 2006-08-02 | 2015-11-17 | Silver Peak Systems, Inc. | Data matching using flow based packet data storage |
US9363248B1 (en) | 2005-08-12 | 2016-06-07 | Silver Peak Systems, Inc. | Data encryption in a network memory architecture for providing data based on local accessibility |
US9363309B2 (en) | 2005-09-29 | 2016-06-07 | Silver Peak Systems, Inc. | Systems and methods for compressing packet data by predicting subsequent data |
US9584403B2 (en) | 2006-08-02 | 2017-02-28 | Silver Peak Systems, Inc. | Communications scheduler |
US9613071B1 (en) | 2007-11-30 | 2017-04-04 | Silver Peak Systems, Inc. | Deferred data storage |
US9626224B2 (en) | 2011-11-03 | 2017-04-18 | Silver Peak Systems, Inc. | Optimizing available computing resources within a virtual environment |
US9712463B1 (en) | 2005-09-29 | 2017-07-18 | Silver Peak Systems, Inc. | Workload optimization in a wide area network utilizing virtual switches |
US9717021B2 (en) | 2008-07-03 | 2017-07-25 | Silver Peak Systems, Inc. | Virtual network overlay |
US9875344B1 (en) | 2014-09-05 | 2018-01-23 | Silver Peak Systems, Inc. | Dynamic monitoring and authorization of an optimization device |
US9948496B1 (en) | 2014-07-30 | 2018-04-17 | Silver Peak Systems, Inc. | Determining a transit appliance for data traffic to a software service |
US9967056B1 (en) | 2016-08-19 | 2018-05-08 | Silver Peak Systems, Inc. | Forward packet recovery with constrained overhead |
US10164861B2 (en) | 2015-12-28 | 2018-12-25 | Silver Peak Systems, Inc. | Dynamic monitoring and visualization for network health characteristics |
US10257082B2 (en) | 2017-02-06 | 2019-04-09 | Silver Peak Systems, Inc. | Multi-level learning for classifying traffic flows |
WO2019099140A1 (en) * | 2017-11-20 | 2019-05-23 | ASG Technologies Group, Inc. dba ASG Technologies | Publication of applications using server-side virtual screen change capture |
US10432484B2 (en) | 2016-06-13 | 2019-10-01 | Silver Peak Systems, Inc. | Aggregating select network traffic statistics |
US10637721B2 (en) | 2018-03-12 | 2020-04-28 | Silver Peak Systems, Inc. | Detecting path break conditions while minimizing network overhead |
CN111600779A (en) * | 2020-06-24 | 2020-08-28 | 厦门长江电子科技有限公司 | Test platform compatible with various switches |
US10771394B2 (en) | 2017-02-06 | 2020-09-08 | Silver Peak Systems, Inc. | Multi-level learning for classifying traffic flows on a first packet from DNS data |
US10805840B2 (en) | 2008-07-03 | 2020-10-13 | Silver Peak Systems, Inc. | Data transmission via a virtual wide area network overlay |
US10812611B2 (en) | 2017-12-29 | 2020-10-20 | Asg Technologies Group, Inc. | Platform-independent application publishing to a personalized front-end interface by encapsulating published content into a container |
US10877740B2 (en) | 2017-12-29 | 2020-12-29 | Asg Technologies Group, Inc. | Dynamically deploying a component in an application |
US10892978B2 (en) | 2017-02-06 | 2021-01-12 | Silver Peak Systems, Inc. | Multi-level learning for classifying traffic flows from first packet data |
US11044202B2 (en) | 2017-02-06 | 2021-06-22 | Silver Peak Systems, Inc. | Multi-level learning for predicting and classifying traffic flows from first packet data |
US11055067B2 (en) | 2019-10-18 | 2021-07-06 | Asg Technologies Group, Inc. | Unified digital automation platform |
US11086751B2 (en) | 2016-03-16 | 2021-08-10 | Asg Technologies Group, Inc. | Intelligent metadata management and data lineage tracing |
US11212210B2 (en) | 2017-09-21 | 2021-12-28 | Silver Peak Systems, Inc. | Selective route exporting using source type |
US11269660B2 (en) | 2019-10-18 | 2022-03-08 | Asg Technologies Group, Inc. | Methods and systems for integrated development environment editor support with a single code base |
US11611633B2 (en) | 2017-12-29 | 2023-03-21 | Asg Technologies Group, Inc. | Systems and methods for platform-independent application publishing to a front-end interface |
US11693982B2 (en) | 2019-10-18 | 2023-07-04 | Asg Technologies Group, Inc. | Systems for secure enterprise-wide fine-grained role-based access control of organizational assets |
US11762634B2 (en) | 2019-06-28 | 2023-09-19 | Asg Technologies Group, Inc. | Systems and methods for seamlessly integrating multiple products by using a common visual modeler |
US11847040B2 (en) | 2016-03-16 | 2023-12-19 | Asg Technologies Group, Inc. | Systems and methods for detecting data alteration from source to target |
US11849330B2 (en) | 2020-10-13 | 2023-12-19 | Asg Technologies Group, Inc. | Geolocation-based policy rules |
US11886397B2 (en) | 2019-10-18 | 2024-01-30 | Asg Technologies Group, Inc. | Multi-faceted trust system |
US11941137B2 (en) | 2019-10-18 | 2024-03-26 | Asg Technologies Group, Inc. | Use of multi-faceted trust scores for decision making, action triggering, and data analysis and interpretation |
Families Citing this family (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7606314B2 (en) * | 2002-08-29 | 2009-10-20 | Raritan America, Inc. | Method and apparatus for caching, compressing and transmitting video signals |
US7818480B2 (en) * | 2002-08-29 | 2010-10-19 | Raritan Americas, Inc. | Wireless management of remote devices |
US7684483B2 (en) * | 2002-08-29 | 2010-03-23 | Raritan Americas, Inc. | Method and apparatus for digitizing and compressing remote video signals |
US20040044822A1 (en) * | 2002-09-03 | 2004-03-04 | Heng-Chien Chen | Computer I/O switching means based on network links |
JP4601895B2 (en) * | 2002-09-26 | 2010-12-22 | 富士通コンポーネント株式会社 | Switching device and computer system |
US20040093391A1 (en) * | 2002-11-07 | 2004-05-13 | Heng-Chien Chen | Computer console for wirelessly controlling remote computers |
JP4246528B2 (en) * | 2003-03-26 | 2009-04-02 | 富士通コンポーネント株式会社 | Selector |
US20070036442A1 (en) * | 2003-04-11 | 2007-02-15 | Stoffer Jay H | Adaptive subtraction image compression |
US20050052465A1 (en) * | 2003-07-03 | 2005-03-10 | Moore Richard L. | Wireless keyboard, video, mouse device |
US7853740B2 (en) * | 2003-09-18 | 2010-12-14 | Riip, Inc. | Keyboard video mouse (KVM) switch for transmission of high quality audio with 64-bit data packets wherein transmissions of data packets are wherein a defined time limit |
US8255804B2 (en) | 2003-09-22 | 2012-08-28 | Broadcom Corporation | Resource controlled user interface resource management |
US8271880B2 (en) * | 2003-09-22 | 2012-09-18 | Broadcom Corporation | Central system based user interface resource management |
US7246183B2 (en) * | 2003-11-14 | 2007-07-17 | Avocent California Corporation | Phase optimization for wireless KVM transmission |
US7475322B2 (en) * | 2003-11-14 | 2009-01-06 | Avocent Huntsville Corporation | Wireless broadcast protocol |
US8176155B2 (en) * | 2003-11-26 | 2012-05-08 | Riip, Inc. | Remote network management system |
US20050202388A1 (en) * | 2004-03-11 | 2005-09-15 | Zuhl Michael A. | Method and apparatus for remote interaction with a computer over a network |
US20050204015A1 (en) * | 2004-03-11 | 2005-09-15 | Steinhart Jonathan E. | Method and apparatus for generation and transmission of computer graphics data |
US7853663B2 (en) | 2004-03-12 | 2010-12-14 | Riip, Inc. | Wireless management system for control of remote devices |
US20070195883A1 (en) * | 2004-03-19 | 2007-08-23 | Koninklijke Philips Electronics, N.V. | Media signal processing method, corresponding system, and application thereof in a resource-scalable motion estimator |
US7403204B2 (en) * | 2004-08-23 | 2008-07-22 | Hewlett-Packard Development Company, L.P. | Method and apparatus for managing changes in a virtual screen buffer |
US7982757B2 (en) * | 2005-04-01 | 2011-07-19 | Digital Multitools Inc. | Method for reducing noise and jitter effects in KVM systems |
US8478884B2 (en) * | 2005-09-30 | 2013-07-02 | Riip, Inc. | Wireless remote device management utilizing mesh topology |
US20060053212A1 (en) * | 2005-10-28 | 2006-03-09 | Aspeed Technology Inc. | Computer network architecture for providing display data at remote monitor |
US7716551B2 (en) * | 2005-12-07 | 2010-05-11 | Microsoft Corporation | Feedback and frame synchronization between media encoders and decoders |
US7668382B2 (en) * | 2006-02-24 | 2010-02-23 | Microsoft Corporation | Block-based fast image compression |
CN100464585C (en) * | 2006-05-16 | 2009-02-25 | 华为技术有限公司 | Video-frequency compression method |
US7920717B2 (en) * | 2007-02-20 | 2011-04-05 | Microsoft Corporation | Pixel extraction and replacement |
US8136042B2 (en) * | 2007-05-11 | 2012-03-13 | Raritan Americas, Inc. | Local port browser interface |
US8300699B2 (en) * | 2007-05-31 | 2012-10-30 | Qualcomm Incorporated | System, method, and computer-readable medium for reducing required throughput in an ultra-wideband system |
JP4609458B2 (en) * | 2007-06-25 | 2011-01-12 | セイコーエプソン株式会社 | Projector and image processing apparatus |
US7903873B2 (en) * | 2007-09-13 | 2011-03-08 | Microsoft Corporation | Textual image coding |
WO2009047694A1 (en) * | 2007-10-08 | 2009-04-16 | Nxp B.V. | Method and system for managing the encoding of digital video content |
DE102007048579B4 (en) * | 2007-10-10 | 2016-05-19 | Airbus Operations Gmbh | Multipurpose flight attendant panel |
US20090177901A1 (en) * | 2008-01-08 | 2009-07-09 | Aten International Co., Ltd. | Kvm management system capable of controlling computer power |
US8248387B1 (en) * | 2008-02-12 | 2012-08-21 | Microsoft Corporation | Efficient buffering of data frames for multiple clients |
US20090210817A1 (en) * | 2008-02-15 | 2009-08-20 | Microsoft Corporation | Mechanism for increasing remote desktop responsiveness |
US20090249214A1 (en) * | 2008-03-31 | 2009-10-01 | Best Steven F | Providing a Shared Buffer Between Multiple Computer Terminals |
US8024502B2 (en) * | 2008-04-18 | 2011-09-20 | Aten International Co., Ltd. | KVM extender system and local, remote modules thereof |
FR2932047B1 (en) * | 2008-05-29 | 2010-08-13 | Airbus France | COMPUTER SYSTEM FOR MAINTENANCE OF A REMOTE TERMINAL AIRCRAFT |
US8200896B2 (en) * | 2008-06-06 | 2012-06-12 | Microsoft Corporation | Increasing remote desktop performance with video caching |
JP4827950B2 (en) * | 2008-07-31 | 2011-11-30 | 富士通株式会社 | Server device |
US20100083122A1 (en) * | 2008-10-01 | 2010-04-01 | International Business Machines Corporation | Systems, methods and computer products for controlling multiple machines using a seamless user-interface to a multi-display |
TW201026056A (en) * | 2008-12-16 | 2010-07-01 | Quanta Comp Inc | Image capturing device and image delivery method |
US9253505B2 (en) | 2009-04-08 | 2016-02-02 | Newrow, Inc. | System and method for image compression |
US8473651B1 (en) * | 2009-04-29 | 2013-06-25 | Clisertec Corporation | Isolated protected access device |
JP5318699B2 (en) * | 2009-08-17 | 2013-10-16 | 富士通コンポーネント株式会社 | KVM switch, KVM system and program |
US8510275B2 (en) * | 2009-09-21 | 2013-08-13 | Dell Products L.P. | File aware block level deduplication |
AU2011100376A4 (en) | 2010-11-18 | 2011-05-12 | Zensar Technologies Ltd | System and Method for Delta Change Synchronization |
US9386297B2 (en) * | 2012-02-24 | 2016-07-05 | Casio Computer Co., Ltd. | Image generating apparatus generating reconstructed image, method, and computer-readable recording medium |
KR101966921B1 (en) * | 2012-09-12 | 2019-08-27 | 삼성전자주식회사 | Method and Apparatus of managing muti-session |
US9313602B2 (en) | 2012-10-24 | 2016-04-12 | Beta Brain, Inc. | Remotely accessing a computer system |
WO2016007512A1 (en) * | 2014-07-07 | 2016-01-14 | Newrow, Inc. | System and method for image compression |
GB2528870A (en) * | 2014-07-31 | 2016-02-10 | Displaylink Uk Ltd | Managing display data for display |
KR102317091B1 (en) | 2014-12-12 | 2021-10-25 | 삼성전자주식회사 | Apparatus and method for processing image |
EP3185555B1 (en) * | 2015-12-23 | 2023-07-05 | Université de Genève | Image compression method with negligible and quantifiable information loss and high compression ratio |
US10540308B2 (en) * | 2016-05-23 | 2020-01-21 | Dell Products, Lp | System and method for providing a remote keyboard/video/mouse in a headless server |
CN108965806B (en) * | 2018-07-12 | 2021-01-08 | 江门市金佣网有限公司 | Data transmission method and device based on remote exhibition and marketing system |
US10824501B2 (en) * | 2019-01-07 | 2020-11-03 | Mellanox Technologies, Ltd. | Computer code integrity checking |
CN113038274B (en) * | 2019-12-24 | 2023-08-29 | 瑞昱半导体股份有限公司 | Video interface conversion device and method |
US11741232B2 (en) | 2021-02-01 | 2023-08-29 | Mellanox Technologies, Ltd. | Secure in-service firmware update |
CN113709490A (en) * | 2021-07-30 | 2021-11-26 | 山东云海国创云计算装备产业创新中心有限公司 | Video compression method, device, system and medium |
Citations (88)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4698672A (en) * | 1986-10-27 | 1987-10-06 | Compression Labs, Inc. | Coding system for reducing redundancy |
US4771865A (en) * | 1986-07-07 | 1988-09-20 | Inventio Ag | System for the remote management of elevator installations |
US5008747A (en) * | 1987-10-19 | 1991-04-16 | British Telecommunications Public Limited Company | Signal coding |
US5483634A (en) * | 1992-05-19 | 1996-01-09 | Canon Kabushiki Kaisha | Display control apparatus and method utilizing first and second image planes |
US5552832A (en) * | 1994-10-26 | 1996-09-03 | Intel Corporation | Run-length encoding sequence for video signals |
US5576845A (en) * | 1993-05-19 | 1996-11-19 | Ricoh Co., Ltd. | Encoding apparatus and method for processing color data in blocks |
US5721842A (en) * | 1995-08-25 | 1998-02-24 | Apex Pc Solutions, Inc. | Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch |
US5732212A (en) * | 1992-10-23 | 1998-03-24 | Fox Network Systems, Inc. | System and method for remote monitoring and operation of personal computers |
US5742274A (en) * | 1995-10-02 | 1998-04-21 | Pixelvision Inc. | Video interface system utilizing reduced frequency video signal processing |
US5757424A (en) * | 1995-12-19 | 1998-05-26 | Xerox Corporation | High-resolution video conferencing system |
US5767897A (en) * | 1994-10-31 | 1998-06-16 | Picturetel Corporation | Video conferencing system |
US5802213A (en) * | 1994-10-18 | 1998-09-01 | Intel Corporation | Encoding video signals using local quantization levels |
US5821986A (en) * | 1994-11-03 | 1998-10-13 | Picturetel Corporation | Method and apparatus for visual communications in a scalable network environment |
US5861960A (en) * | 1993-09-21 | 1999-01-19 | Fuji Xerox Co., Ltd. | Image signal encoding apparatus |
US6016166A (en) * | 1998-08-31 | 2000-01-18 | Lucent Technologies Inc. | Method and apparatus for adaptive synchronization of digital video and audio playback in a multimedia playback system |
US6091857A (en) * | 1991-04-17 | 2000-07-18 | Shaw; Venson M. | System for producing a quantized signal |
US6167432A (en) * | 1996-02-29 | 2000-12-26 | Webex Communications, Inc., | Method for creating peer-to-peer connections over an interconnected network to facilitate conferencing among users |
US6173082B1 (en) * | 1997-03-28 | 2001-01-09 | Canon Kabushiki Kaisha | Image processing apparatus and method for performing image processes according to image change and storing medium storing therein image processing programs |
US6252884B1 (en) * | 1998-03-20 | 2001-06-26 | Ncr Corporation | Dynamic configuration of wireless networks |
US6263365B1 (en) * | 1996-10-04 | 2001-07-17 | Raindance Communications, Inc. | Browser controller |
US6289378B1 (en) * | 1998-10-20 | 2001-09-11 | Triactive Technologies, L.L.C. | Web browser remote computer management system |
US6304895B1 (en) * | 1997-08-22 | 2001-10-16 | Apex Inc. | Method and system for intelligently controlling a remotely located computer |
US6330595B1 (en) * | 1996-03-08 | 2001-12-11 | Actv, Inc. | Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments |
US6333750B1 (en) * | 1997-03-12 | 2001-12-25 | Cybex Computer Products Corporation | Multi-sourced video distribution hub |
US6343313B1 (en) * | 1996-03-26 | 2002-01-29 | Pixion, Inc. | Computer conferencing system with real-time multipoint, multi-speed, multi-stream scalability |
US20020018124A1 (en) * | 2000-07-26 | 2002-02-14 | Mottur Peter A. | Methods and systems for networked camera control |
US6363062B1 (en) * | 1999-06-08 | 2002-03-26 | Caly Corporation | Communications protocol for packet data particularly in mesh topology wireless networks |
US6373850B1 (en) * | 1997-07-10 | 2002-04-16 | Bull S.A. | Stand alone routing switch and videoconferencing system using the stand alone routing switch |
US6378014B1 (en) * | 1999-08-25 | 2002-04-23 | Apex Inc. | Terminal emulator for interfacing between a communications port and a KVM switch |
US6388658B1 (en) * | 1999-05-26 | 2002-05-14 | Cybex Computer Products Corp. | High-end KVM switching system |
US6408334B1 (en) * | 1999-01-13 | 2002-06-18 | Dell Usa, L.P. | Communications system for multiple computer system management circuits |
US20020095594A1 (en) * | 2001-01-16 | 2002-07-18 | Harris Corporation | Secure wireless LAN device including tamper resistant feature and associated method |
US6445818B1 (en) * | 1998-05-28 | 2002-09-03 | Lg Electronics Inc. | Automatically determining an optimal content image search algorithm by choosing the algorithm based on color |
US20020128041A1 (en) * | 2001-03-09 | 2002-09-12 | Parry Travis J. | Methods and systems for controlling multiple computing devices |
US20020147840A1 (en) * | 2001-04-05 | 2002-10-10 | Mutton James Andrew | Distributed link processing system for delivering application and multi-media content on the internet |
US20020188709A1 (en) * | 2001-05-04 | 2002-12-12 | Rlx Technologies, Inc. | Console information server system and method |
US20030017826A1 (en) * | 2001-07-17 | 2003-01-23 | Dan Fishman | Short-range wireless architecture |
US20030030660A1 (en) * | 2001-08-08 | 2003-02-13 | Dischert Lee R. | Method and apparatus for remote use of personal computer |
US20030037130A1 (en) * | 2001-08-16 | 2003-02-20 | Doug Rollins | Method and system for accessing computer systems in a computer network |
US6532218B1 (en) * | 1999-04-05 | 2003-03-11 | Siemens Information & Communication Networks, Inc. | System and method for multimedia collaborative conferencing |
US6535983B1 (en) * | 1999-11-08 | 2003-03-18 | 3Com Corporation | System and method for signaling and detecting request for power over ethernet |
US20030088655A1 (en) * | 2001-11-02 | 2003-05-08 | Leigh Kevin B. | Remote management system for multiple servers |
US6564380B1 (en) * | 1999-01-26 | 2003-05-13 | Pixelworld Networks, Inc. | System and method for sending live video on the internet |
US20030092437A1 (en) * | 2001-11-13 | 2003-05-15 | Nowlin Dan H. | Method for switching the use of a shared set of wireless I/O devices between multiple computers |
US6567813B1 (en) * | 2000-12-29 | 2003-05-20 | Webex Communications, Inc. | Quality of service maintenance for distributed collaborative computing |
US6571016B1 (en) * | 1997-05-05 | 2003-05-27 | Microsoft Corporation | Intra compression of pixel blocks using predicted mean |
US20030112467A1 (en) * | 2001-12-17 | 2003-06-19 | Mccollum Tim | Apparatus and method for multimedia navigation |
US6622018B1 (en) * | 2000-04-24 | 2003-09-16 | 3Com Corporation | Portable device control console with wireless connection |
US6621413B1 (en) * | 2000-08-16 | 2003-09-16 | Ge Medical Systems Global Technology Company, Llc | Wireless monitoring of a mobile magnet |
US20030217123A1 (en) * | 1998-09-22 | 2003-11-20 | Anderson Robin L. | System and method for accessing and operating personal computers remotely |
US6664969B1 (en) * | 1999-11-12 | 2003-12-16 | Hewlett-Packard Development Company, L.P. | Operating system independent method and apparatus for graphical remote access |
US6675174B1 (en) * | 2000-02-02 | 2004-01-06 | International Business Machines Corp. | System and method for measuring similarity between a set of known temporal media segments and a one or more temporal media streams |
US6681250B1 (en) * | 2000-05-03 | 2004-01-20 | Avocent Corporation | Network based KVM switching system |
US20040015980A1 (en) * | 2002-07-17 | 2004-01-22 | Sarah Rowen | Systems and methods for monitoring and controlling multiple computers |
US20040045030A1 (en) * | 2001-09-26 | 2004-03-04 | Reynolds Jodie Lynn | System and method for communicating media signals |
US20040042547A1 (en) * | 2002-08-29 | 2004-03-04 | Scott Coleman | Method and apparatus for digitizing and compressing remote video signals |
US20040062305A1 (en) * | 2002-10-01 | 2004-04-01 | Dambrackas William A. | Video compression system |
US6728753B1 (en) * | 1999-06-15 | 2004-04-27 | Microsoft Corporation | Presentation broadcasting |
US20040093401A1 (en) * | 2002-11-13 | 2004-05-13 | International Business Machines Corporation | Client-server text messaging monitoring for remote computer management |
US20040117426A1 (en) * | 2001-04-19 | 2004-06-17 | Steven Rudkin | Communications network |
US6772169B2 (en) * | 2000-11-09 | 2004-08-03 | Expand Beyond Corporation | System, method and apparatus for the wireless monitoring and management of computer systems |
US6771213B2 (en) * | 1999-06-18 | 2004-08-03 | Jennifer Durst | Object locator |
US20040249953A1 (en) * | 2003-05-14 | 2004-12-09 | Microsoft Corporation | Peer-to-peer instant messaging |
US20050018766A1 (en) * | 2003-07-21 | 2005-01-27 | Sony Corporation And Sony Electronics, Inc. | Power-line communication based surveillance system |
US6850502B1 (en) * | 2000-10-30 | 2005-02-01 | Radiant Networks, Plc | Join process method for admitting a node to a wireless mesh network |
US20050027890A1 (en) * | 2003-04-03 | 2005-02-03 | Nelson Matt S. | Wireless computer system |
US20050030377A1 (en) * | 2003-04-07 | 2005-02-10 | Shaolin Li | Monitoring system using multi-antenna transceivers |
US20050094577A1 (en) * | 2003-10-29 | 2005-05-05 | Peter Ashwood-Smith | Virtual private networks within a packet network having a mesh topology |
US20050104852A1 (en) * | 2003-11-18 | 2005-05-19 | Emerson Theodore F. | Generating pointer position data from position data of a pointing device of a remote console |
US20050114894A1 (en) * | 2003-11-26 | 2005-05-26 | David Hoerl | System for video digitization and image correction for use with a computer management system |
US20050125519A1 (en) * | 2003-11-26 | 2005-06-09 | Allen Yang | Remote network management system |
US20050132403A1 (en) * | 2003-12-12 | 2005-06-16 | Alex Lee | Option menu for use with a computer management system |
US20050195775A1 (en) * | 2004-03-03 | 2005-09-08 | Petite Thomas D. | System and method for monitoring remote devices with a dual-mode wireless communication protocol |
US20050204082A1 (en) * | 2001-03-29 | 2005-09-15 | Avocent Corporation | Computer interface module |
US6952495B1 (en) * | 1999-11-19 | 2005-10-04 | Lg Electronics Inc. | Method for quantization of histogram bin value of image |
US7024474B2 (en) * | 2000-01-31 | 2006-04-04 | Telecommunication Systems, Inc. | System and method to publish information from servers to remote monitor devices |
US20060083205A1 (en) * | 2004-10-14 | 2006-04-20 | Buddhikot Milind M | Method and system for wireless networking using coordinated dynamic spectrum access |
US20060095539A1 (en) * | 2004-10-29 | 2006-05-04 | Martin Renkis | Wireless video surveillance system and method for mesh networking |
US7042587B2 (en) * | 2001-11-28 | 2006-05-09 | Hewlett-Packard Development Company, L.P. | Image data caching |
US7099934B1 (en) * | 1996-07-23 | 2006-08-29 | Ewing Carrel W | Network-connecting power manager for remote appliances |
US7117266B2 (en) * | 2001-07-17 | 2006-10-03 | Bea Systems, Inc. | Method for providing user-apparent consistency in a wireless device |
US7127619B2 (en) * | 2001-06-06 | 2006-10-24 | Sony Corporation | Decoding and decryption of partially encrypted information |
US7206940B2 (en) * | 2002-06-24 | 2007-04-17 | Microsoft Corporation | Methods and systems providing per pixel security and functionality |
US7249167B1 (en) * | 2000-11-09 | 2007-07-24 | Raritan, Inc. | Intelligent modular server management system for selectively operating a plurality of computers |
US7260624B2 (en) * | 2002-09-20 | 2007-08-21 | American Megatrends, Inc. | Systems and methods for establishing interaction between a local computer and a remote computer |
US7342895B2 (en) * | 2004-01-30 | 2008-03-11 | Mark Serpa | Method and system for peer-to-peer wireless communication over unlicensed communication spectrum |
US7382397B2 (en) * | 2000-07-26 | 2008-06-03 | Smiths Detection, Inc. | Systems and methods for controlling devices over a network |
US7502884B1 (en) * | 2004-07-22 | 2009-03-10 | Xsigo Systems | Resource virtualization switch |
- 2002-08-29: US US10/233,299, patent US7684483B2 (en), status: Active
- 2010-03-22: US US12/728,998, publication US20100225658A1 (en), status: Abandoned
Patent Citations (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4771865A (en) * | 1986-07-07 | 1988-09-20 | Inventio Ag | System for the remote management of elevator installations |
US4698672A (en) * | 1986-10-27 | 1987-10-06 | Compression Labs, Inc. | Coding system for reducing redundancy |
US5008747A (en) * | 1987-10-19 | 1991-04-16 | British Telecommunications Public Limited Company | Signal coding |
US6091857A (en) * | 1991-04-17 | 2000-07-18 | Shaw; Venson M. | System for producing a quantized signal |
US5483634A (en) * | 1992-05-19 | 1996-01-09 | Canon Kabushiki Kaisha | Display control apparatus and method utilizing first and second image planes |
US5732212A (en) * | 1992-10-23 | 1998-03-24 | Fox Network Systems, Inc. | System and method for remote monitoring and operation of personal computers |
US5576845A (en) * | 1993-05-19 | 1996-11-19 | Ricoh Co., Ltd. | Encoding apparatus and method for processing color data in blocks |
US5861960A (en) * | 1993-09-21 | 1999-01-19 | Fuji Xerox Co., Ltd. | Image signal encoding apparatus |
US5802213A (en) * | 1994-10-18 | 1998-09-01 | Intel Corporation | Encoding video signals using local quantization levels |
US5552832A (en) * | 1994-10-26 | 1996-09-03 | Intel Corporation | Run-length encoding sequence for video signals |
US5767897A (en) * | 1994-10-31 | 1998-06-16 | Picturetel Corporation | Video conferencing system |
US5821986A (en) * | 1994-11-03 | 1998-10-13 | Picturetel Corporation | Method and apparatus for visual communications in a scalable network environment |
US6112264A (en) * | 1995-08-25 | 2000-08-29 | Apex Pc Solutions Inc. | Computer interconnection system having analog overlay for remote control of the interconnection switch |
US5884096A (en) * | 1995-08-25 | 1999-03-16 | Apex Pc Solutions, Inc. | Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch |
US5937176A (en) * | 1995-08-25 | 1999-08-10 | Apex Pc Solutions, Inc. | Interconnection system having circuits to packetize keyboard/mouse electronic signals from plural workstations and supply to keyboard/mouse input of remote computer systems through a crosspoint switch |
US6345323B1 (en) * | 1995-08-25 | 2002-02-05 | Apex, Inc. | Computer interconnection system |
US5721842A (en) * | 1995-08-25 | 1998-02-24 | Apex Pc Solutions, Inc. | Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch |
US5742274A (en) * | 1995-10-02 | 1998-04-21 | Pixelvision Inc. | Video interface system utilizing reduced frequency video signal processing |
US5757424A (en) * | 1995-12-19 | 1998-05-26 | Xerox Corporation | High-resolution video conferencing system |
US6167432A (en) * | 1996-02-29 | 2000-12-26 | Webex Communications, Inc., | Method for creating peer-to-peer connections over an interconnected network to facilitate conferencing among users |
US6330595B1 (en) * | 1996-03-08 | 2001-12-11 | Actv, Inc. | Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments |
US6343313B1 (en) * | 1996-03-26 | 2002-01-29 | Pixion, Inc. | Computer conferencing system with real-time multipoint, multi-speed, multi-stream scalability |
US7099934B1 (en) * | 1996-07-23 | 2006-08-29 | Ewing Carrel W | Network-connecting power manager for remote appliances |
US6263365B1 (en) * | 1996-10-04 | 2001-07-17 | Raindance Communications, Inc. | Browser controller |
US6333750B1 (en) * | 1997-03-12 | 2001-12-25 | Cybex Computer Products Corporation | Multi-sourced video distribution hub |
US6173082B1 (en) * | 1997-03-28 | 2001-01-09 | Canon Kabushiki Kaisha | Image processing apparatus and method for performing image processes according to image change and storing medium storing therein image processing programs |
US6571016B1 (en) * | 1997-05-05 | 2003-05-27 | Microsoft Corporation | Intra compression of pixel blocks using predicted mean |
US6373850B1 (en) * | 1997-07-10 | 2002-04-16 | Bull S.A. | Stand alone routing switch and videoconferencing system using the stand alone routing switch |
US20020038334A1 (en) * | 1997-08-22 | 2002-03-28 | Schneider Walter J. | Method and system for intelligently controlling a remotely located computer |
US6701380B2 (en) * | 1997-08-22 | 2004-03-02 | Avocent Redmond Corp. | Method and system for intelligently controlling a remotely located computer |
US6304895B1 (en) * | 1997-08-22 | 2001-10-16 | Apex Inc. | Method and system for intelligently controlling a remotely located computer |
US20030135656A1 (en) * | 1997-08-22 | 2003-07-17 | Apex Inc. | Method and system for intellegently controlling a remotely located computer |
US6539418B2 (en) * | 1997-08-22 | 2003-03-25 | Apex Inc. | Method and system for intelligently controlling a remotely located computer |
US6252884B1 (en) * | 1998-03-20 | 2001-06-26 | Ncr Corporation | Dynamic configuration of wireless networks |
US6445818B1 (en) * | 1998-05-28 | 2002-09-03 | Lg Electronics Inc. | Automatically determining an optimal content image search algorithm by choosing the algorithm based on color |
US6016166A (en) * | 1998-08-31 | 2000-01-18 | Lucent Technologies Inc. | Method and apparatus for adaptive synchronization of digital video and audio playback in a multimedia playback system |
US20030217123A1 (en) * | 1998-09-22 | 2003-11-20 | Anderson Robin L. | System and method for accessing and operating personal computers remotely |
US6289378B1 (en) * | 1998-10-20 | 2001-09-11 | Triactive Technologies, L.L.C. | Web browser remote computer management system |
US6408334B1 (en) * | 1999-01-13 | 2002-06-18 | Dell Usa, L.P. | Communications system for multiple computer system management circuits |
US6564380B1 (en) * | 1999-01-26 | 2003-05-13 | Pixelworld Networks, Inc. | System and method for sending live video on the internet |
US6532218B1 (en) * | 1999-04-05 | 2003-03-11 | Siemens Information & Communication Networks, Inc. | System and method for multimedia collaborative conferencing |
US6388658B1 (en) * | 1999-05-26 | 2002-05-14 | Cybex Computer Products Corp. | High-end KVM switching system |
US6363062B1 (en) * | 1999-06-08 | 2002-03-26 | Caly Corporation | Communications protocol for packet data particularly in mesh topology wireless networks |
US6728753B1 (en) * | 1999-06-15 | 2004-04-27 | Microsoft Corporation | Presentation broadcasting |
US6771213B2 (en) * | 1999-06-18 | 2004-08-03 | Jennifer Durst | Object locator |
US20030191878A1 (en) * | 1999-08-25 | 2003-10-09 | Avocent Redmond Corporation | KVM switch including a terminal emulator |
US6378014B1 (en) * | 1999-08-25 | 2002-04-23 | Apex Inc. | Terminal emulator for interfacing between a communications port and a KVM switch |
US6567869B2 (en) * | 1999-08-25 | 2003-05-20 | Apex Inc. | KVM switch including a terminal emulator |
US6535983B1 (en) * | 1999-11-08 | 2003-03-18 | 3Com Corporation | System and method for signaling and detecting request for power over ethernet |
US6664969B1 (en) * | 1999-11-12 | 2003-12-16 | Hewlett-Packard Development Company, L.P. | Operating system independent method and apparatus for graphical remote access |
US6952495B1 (en) * | 1999-11-19 | 2005-10-04 | Lg Electronics Inc. | Method for quantization of histogram bin value of image |
US7024474B2 (en) * | 2000-01-31 | 2006-04-04 | Telecommunication Systems, Inc. | System and method to publish information from servers to remote monitor devices |
US6675174B1 (en) * | 2000-02-02 | 2004-01-06 | International Business Machines Corp. | System and method for measuring similarity between a set of known temporal media segments and a one or more temporal media streams |
US6622018B1 (en) * | 2000-04-24 | 2003-09-16 | 3Com Corporation | Portable device control console with wireless connection |
US20050044184A1 (en) * | 2000-05-03 | 2005-02-24 | Thomas Christopher L. | Network based KVM switching system |
US6681250B1 (en) * | 2000-05-03 | 2004-01-20 | Avocent Corporation | Network based KVM switching system |
US20020018124A1 (en) * | 2000-07-26 | 2002-02-14 | Mottur Peter A. | Methods and systems for networked camera control |
US7382397B2 (en) * | 2000-07-26 | 2008-06-03 | Smiths Detection, Inc. | Systems and methods for controlling devices over a network |
US6621413B1 (en) * | 2000-08-16 | 2003-09-16 | Ge Medical Systems Global Technology Company, Llc | Wireless monitoring of a mobile magnet |
US6850502B1 (en) * | 2000-10-30 | 2005-02-01 | Radiant Networks, Plc | Join process method for admitting a node to a wireless mesh network |
US7249167B1 (en) * | 2000-11-09 | 2007-07-24 | Raritan, Inc. | Intelligent modular server management system for selectively operating a plurality of computers |
US6772169B2 (en) * | 2000-11-09 | 2004-08-03 | Expand Beyond Corporation | System, method and apparatus for the wireless monitoring and management of computer systems |
US6567813B1 (en) * | 2000-12-29 | 2003-05-20 | Webex Communications, Inc. | Quality of service maintenance for distributed collaborative computing |
US20020095594A1 (en) * | 2001-01-16 | 2002-07-18 | Harris Corporation | Secure wireless LAN device including tamper resistant feature and associated method |
US20020128041A1 (en) * | 2001-03-09 | 2002-09-12 | Parry Travis J. | Methods and systems for controlling multiple computing devices |
US20050204082A1 (en) * | 2001-03-29 | 2005-09-15 | Avocent Corporation | Computer interface module |
US20020147840A1 (en) * | 2001-04-05 | 2002-10-10 | Mutton James Andrew | Distributed link processing system for delivering application and multi-media content on the internet |
US20040117426A1 (en) * | 2001-04-19 | 2004-06-17 | Steven Rudkin | Communications network |
US20020188709A1 (en) * | 2001-05-04 | 2002-12-12 | Rlx Technologies, Inc. | Console information server system and method |
US7127619B2 (en) * | 2001-06-06 | 2006-10-24 | Sony Corporation | Decoding and decryption of partially encrypted information |
US20030017826A1 (en) * | 2001-07-17 | 2003-01-23 | Dan Fishman | Short-range wireless architecture |
US7117266B2 (en) * | 2001-07-17 | 2006-10-03 | Bea Systems, Inc. | Method for providing user-apparent consistency in a wireless device |
US20030030660A1 (en) * | 2001-08-08 | 2003-02-13 | Dischert Lee R. | Method and apparatus for remote use of personal computer |
US20030037130A1 (en) * | 2001-08-16 | 2003-02-20 | Doug Rollins | Method and system for accessing computer systems in a computer network |
US20040045030A1 (en) * | 2001-09-26 | 2004-03-04 | Reynolds Jodie Lynn | System and method for communicating media signals |
US20030088655A1 (en) * | 2001-11-02 | 2003-05-08 | Leigh Kevin B. | Remote management system for multiple servers |
US20030092437A1 (en) * | 2001-11-13 | 2003-05-15 | Nowlin Dan H. | Method for switching the use of a shared set of wireless I/O devices between multiple computers |
US7042587B2 (en) * | 2001-11-28 | 2006-05-09 | Hewlett-Packard Development Company, L.P. | Image data caching |
US20030112467A1 (en) * | 2001-12-17 | 2003-06-19 | Mccollum Tim | Apparatus and method for multimedia navigation |
US7206940B2 (en) * | 2002-06-24 | 2007-04-17 | Microsoft Corporation | Methods and systems providing per pixel security and functionality |
US20040015980A1 (en) * | 2002-07-17 | 2004-01-22 | Sarah Rowen | Systems and methods for monitoring and controlling multiple computers |
US20040042547A1 (en) * | 2002-08-29 | 2004-03-04 | Scott Coleman | Method and apparatus for digitizing and compressing remote video signals |
US7260624B2 (en) * | 2002-09-20 | 2007-08-21 | American Megatrends, Inc. | Systems and methods for establishing interaction between a local computer and a remote computer |
US20040062305A1 (en) * | 2002-10-01 | 2004-04-01 | Dambrackas William A. | Video compression system |
US20040093401A1 (en) * | 2002-11-13 | 2004-05-13 | International Business Machines Corporation | Client-server text messaging monitoring for remote computer management |
US20050027890A1 (en) * | 2003-04-03 | 2005-02-03 | Nelson Matt S. | Wireless computer system |
US20050030377A1 (en) * | 2003-04-07 | 2005-02-10 | Shaolin Li | Monitoring system using multi-antenna transceivers |
US20040249953A1 (en) * | 2003-05-14 | 2004-12-09 | Microsoft Corporation | Peer-to-peer instant messaging |
US20050018766A1 (en) * | 2003-07-21 | 2005-01-27 | Sony Corporation And Sony Electronics, Inc. | Power-line communication based surveillance system |
US20050094577A1 (en) * | 2003-10-29 | 2005-05-05 | Peter Ashwood-Smith | Virtual private networks within a packet network having a mesh topology |
US20050104852A1 (en) * | 2003-11-18 | 2005-05-19 | Emerson Theodore F. | Generating pointer position data from position data of a pointing device of a remote console |
US20050114894A1 (en) * | 2003-11-26 | 2005-05-26 | David Hoerl | System for video digitization and image correction for use with a computer management system |
US20050125519A1 (en) * | 2003-11-26 | 2005-06-09 | Allen Yang | Remote network management system |
US20050132403A1 (en) * | 2003-12-12 | 2005-06-16 | Alex Lee | Option menu for use with a computer management system |
US7342895B2 (en) * | 2004-01-30 | 2008-03-11 | Mark Serpa | Method and system for peer-to-peer wireless communication over unlicensed communication spectrum |
US20050195775A1 (en) * | 2004-03-03 | 2005-09-08 | Petite Thomas D. | System and method for monitoring remote devices with a dual-mode wireless communication protocol |
US7502884B1 (en) * | 2004-07-22 | 2009-03-10 | Xsigo Systems | Resource virtualization switch |
US20060083205A1 (en) * | 2004-10-14 | 2006-04-20 | Buddhikot Milind M | Method and system for wireless networking using coordinated dynamic spectrum access |
US20060095539A1 (en) * | 2004-10-29 | 2006-05-04 | Martin Renkis | Wireless video surveillance system and method for mesh networking |
Cited By (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8558795B2 (en) * | 2004-03-12 | 2013-10-15 | Riip, Inc. | Switchless KVM network with wireless technology |
US20050204026A1 (en) * | 2004-03-12 | 2005-09-15 | David Hoerl | Switchless KVM network with wireless technology |
US9363248B1 (en) | 2005-08-12 | 2016-06-07 | Silver Peak Systems, Inc. | Data encryption in a network memory architecture for providing data based on local accessibility |
US10091172B1 (en) | 2005-08-12 | 2018-10-02 | Silver Peak Systems, Inc. | Data encryption in a network memory architecture for providing data based on local accessibility |
US9363309B2 (en) | 2005-09-29 | 2016-06-07 | Silver Peak Systems, Inc. | Systems and methods for compressing packet data by predicting subsequent data |
US9036662B1 (en) * | 2005-09-29 | 2015-05-19 | Silver Peak Systems, Inc. | Compressing packet data |
US9549048B1 (en) * | 2005-09-29 | 2017-01-17 | Silver Peak Systems, Inc. | Transferring compressed packet data over a network |
US9712463B1 (en) | 2005-09-29 | 2017-07-18 | Silver Peak Systems, Inc. | Workload optimization in a wide area network utilizing virtual switches |
US9584403B2 (en) | 2006-08-02 | 2017-02-28 | Silver Peak Systems, Inc. | Communications scheduler |
US9438538B2 (en) | 2006-08-02 | 2016-09-06 | Silver Peak Systems, Inc. | Data matching using flow based packet data storage |
US9961010B2 (en) | 2006-08-02 | 2018-05-01 | Silver Peak Systems, Inc. | Communications scheduler |
US9191342B2 (en) | 2006-08-02 | 2015-11-17 | Silver Peak Systems, Inc. | Data matching using flow based packet data storage |
US9152574B2 (en) | 2007-07-05 | 2015-10-06 | Silver Peak Systems, Inc. | Identification of non-sequential data stored in memory |
US9253277B2 (en) | 2007-07-05 | 2016-02-02 | Silver Peak Systems, Inc. | Pre-fetching stored data from a memory |
US9092342B2 (en) | 2007-07-05 | 2015-07-28 | Silver Peak Systems, Inc. | Pre-fetching data into a memory |
US9613071B1 (en) | 2007-11-30 | 2017-04-04 | Silver Peak Systems, Inc. | Deferred data storage |
US20090238284A1 (en) * | 2008-03-18 | 2009-09-24 | Auratechnic, Inc. | Reducing Differentials In Visual Media |
US8295359B2 (en) * | 2008-03-18 | 2012-10-23 | Auratechnic, Inc. | Reducing differentials in visual media |
US9717021B2 (en) | 2008-07-03 | 2017-07-25 | Silver Peak Systems, Inc. | Virtual network overlay |
US9397951B1 (en) | 2008-07-03 | 2016-07-19 | Silver Peak Systems, Inc. | Quality of service using multiple flows |
US10313930B2 (en) | 2008-07-03 | 2019-06-04 | Silver Peak Systems, Inc. | Virtual wide area network overlays |
US11419011B2 (en) | 2008-07-03 | 2022-08-16 | Hewlett Packard Enterprise Development Lp | Data transmission via bonded tunnels of a virtual wide area network overlay with error correction |
US11412416B2 (en) | 2008-07-03 | 2022-08-09 | Hewlett Packard Enterprise Development Lp | Data transmission via bonded tunnels of a virtual wide area network overlay |
US9143455B1 (en) | 2008-07-03 | 2015-09-22 | Silver Peak Systems, Inc. | Quality of service using multiple flows |
US10805840B2 (en) | 2008-07-03 | 2020-10-13 | Silver Peak Systems, Inc. | Data transmission via a virtual wide area network overlay |
US9906630B2 (en) | 2011-10-14 | 2018-02-27 | Silver Peak Systems, Inc. | Processing data packets in performance enhancing proxy (PEP) environment |
US9130991B2 (en) | 2011-10-14 | 2015-09-08 | Silver Peak Systems, Inc. | Processing data packets in performance enhancing proxy (PEP) environment |
US9626224B2 (en) | 2011-11-03 | 2017-04-18 | Silver Peak Systems, Inc. | Optimizing available computing resources within a virtual environment |
US20130136173A1 (en) * | 2011-11-15 | 2013-05-30 | Panasonic Corporation | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
US10812361B2 (en) | 2014-07-30 | 2020-10-20 | Silver Peak Systems, Inc. | Determining a transit appliance for data traffic to a software service |
US9948496B1 (en) | 2014-07-30 | 2018-04-17 | Silver Peak Systems, Inc. | Determining a transit appliance for data traffic to a software service |
US11374845B2 (en) | 2014-07-30 | 2022-06-28 | Hewlett Packard Enterprise Development Lp | Determining a transit appliance for data traffic to a software service |
US11381493B2 (en) | 2014-07-30 | 2022-07-05 | Hewlett Packard Enterprise Development Lp | Determining a transit appliance for data traffic to a software service |
US9875344B1 (en) | 2014-09-05 | 2018-01-23 | Silver Peak Systems, Inc. | Dynamic monitoring and authorization of an optimization device |
US10885156B2 (en) | 2014-09-05 | 2021-01-05 | Silver Peak Systems, Inc. | Dynamic monitoring and authorization of an optimization device |
US10719588B2 (en) | 2014-09-05 | 2020-07-21 | Silver Peak Systems, Inc. | Dynamic monitoring and authorization of an optimization device |
US11868449B2 (en) | 2014-09-05 | 2024-01-09 | Hewlett Packard Enterprise Development Lp | Dynamic monitoring and authorization of an optimization device |
US11954184B2 (en) | 2014-09-05 | 2024-04-09 | Hewlett Packard Enterprise Development Lp | Dynamic monitoring and authorization of an optimization device |
US11921827B2 (en) | 2014-09-05 | 2024-03-05 | Hewlett Packard Enterprise Development Lp | Dynamic monitoring and authorization of an optimization device |
US11336553B2 (en) | 2015-12-28 | 2022-05-17 | Hewlett Packard Enterprise Development Lp | Dynamic monitoring and visualization for network health characteristics of network device pairs |
US10771370B2 (en) | 2015-12-28 | 2020-09-08 | Silver Peak Systems, Inc. | Dynamic monitoring and visualization for network health characteristics |
US10164861B2 (en) | 2015-12-28 | 2018-12-25 | Silver Peak Systems, Inc. | Dynamic monitoring and visualization for network health characteristics |
US11847040B2 (en) | 2016-03-16 | 2023-12-19 | Asg Technologies Group, Inc. | Systems and methods for detecting data alteration from source to target |
US11086751B2 (en) | 2016-03-16 | 2021-08-10 | Asg Technologies Group, Inc. | Intelligent metadata management and data lineage tracing |
US11757740B2 (en) | 2016-06-13 | 2023-09-12 | Hewlett Packard Enterprise Development Lp | Aggregation of select network traffic statistics |
US10432484B2 (en) | 2016-06-13 | 2019-10-01 | Silver Peak Systems, Inc. | Aggregating select network traffic statistics |
US11601351B2 (en) | 2016-06-13 | 2023-03-07 | Hewlett Packard Enterprise Development Lp | Aggregation of select network traffic statistics |
US11757739B2 (en) | 2016-06-13 | 2023-09-12 | Hewlett Packard Enterprise Development Lp | Aggregation of select network traffic statistics |
US10848268B2 (en) | 2016-08-19 | 2020-11-24 | Silver Peak Systems, Inc. | Forward packet recovery with constrained network overhead |
US10326551B2 (en) | 2016-08-19 | 2019-06-18 | Silver Peak Systems, Inc. | Forward packet recovery with constrained network overhead |
US9967056B1 (en) | 2016-08-19 | 2018-05-08 | Silver Peak Systems, Inc. | Forward packet recovery with constrained overhead |
US11424857B2 (en) | 2016-08-19 | 2022-08-23 | Hewlett Packard Enterprise Development Lp | Forward packet recovery with constrained network overhead |
US10771394B2 (en) | 2017-02-06 | 2020-09-08 | Silver Peak Systems, Inc. | Multi-level learning for classifying traffic flows on a first packet from DNS data |
US11729090B2 (en) | 2017-02-06 | 2023-08-15 | Hewlett Packard Enterprise Development Lp | Multi-level learning for classifying network traffic flows from first packet data |
US11044202B2 (en) | 2017-02-06 | 2021-06-22 | Silver Peak Systems, Inc. | Multi-level learning for predicting and classifying traffic flows from first packet data |
US10892978B2 (en) | 2017-02-06 | 2021-01-12 | Silver Peak Systems, Inc. | Multi-level learning for classifying traffic flows from first packet data |
US10257082B2 (en) | 2017-02-06 | 2019-04-09 | Silver Peak Systems, Inc. | Multi-level learning for classifying traffic flows |
US11582157B2 (en) | 2017-02-06 | 2023-02-14 | Hewlett Packard Enterprise Development Lp | Multi-level learning for classifying traffic flows on a first packet from DNS response data |
US11805045B2 (en) | 2017-09-21 | 2023-10-31 | Hewlett Packard Enterprise Development Lp | Selective routing |
US11212210B2 (en) | 2017-09-21 | 2021-12-28 | Silver Peak Systems, Inc. | Selective route exporting using source type |
WO2019099140A1 (en) * | 2017-11-20 | 2019-05-23 | ASG Technologies Group, Inc. dba ASG Technologies | Publication of applications using server-side virtual screen change capture |
US11582284B2 (en) | 2017-11-20 | 2023-02-14 | Asg Technologies Group, Inc. | Optimization of publication of an application to a web browser |
US11057500B2 (en) | 2017-11-20 | 2021-07-06 | Asg Technologies Group, Inc. | Publication of applications using server-side virtual screen change capture |
US11172042B2 (en) | 2017-12-29 | 2021-11-09 | Asg Technologies Group, Inc. | Platform-independent application publishing to a front-end interface by encapsulating published content in a web container |
US11567750B2 (en) | 2017-12-29 | 2023-01-31 | Asg Technologies Group, Inc. | Web component dynamically deployed in an application and displayed in a workspace product |
US10877740B2 (en) | 2017-12-29 | 2020-12-29 | Asg Technologies Group, Inc. | Dynamically deploying a component in an application |
US10812611B2 (en) | 2017-12-29 | 2020-10-20 | Asg Technologies Group, Inc. | Platform-independent application publishing to a personalized front-end interface by encapsulating published content into a container |
US11611633B2 (en) | 2017-12-29 | 2023-03-21 | Asg Technologies Group, Inc. | Systems and methods for platform-independent application publishing to a front-end interface |
US11405265B2 (en) | 2018-03-12 | 2022-08-02 | Hewlett Packard Enterprise Development Lp | Methods and systems for detecting path break conditions while minimizing network overhead |
US10637721B2 (en) | 2018-03-12 | 2020-04-28 | Silver Peak Systems, Inc. | Detecting path break conditions while minimizing network overhead |
US10887159B2 (en) | 2018-03-12 | 2021-01-05 | Silver Peak Systems, Inc. | Methods and systems for detecting path break conditions while minimizing network overhead |
US11762634B2 (en) | 2019-06-28 | 2023-09-19 | Asg Technologies Group, Inc. | Systems and methods for seamlessly integrating multiple products by using a common visual modeler |
US11886397B2 (en) | 2019-10-18 | 2024-01-30 | Asg Technologies Group, Inc. | Multi-faceted trust system |
US11775666B2 (en) | 2019-10-18 | 2023-10-03 | Asg Technologies Group, Inc. | Federated redaction of select content in documents stored across multiple repositories |
US11550549B2 (en) | 2019-10-18 | 2023-01-10 | Asg Technologies Group, Inc. | Unified digital automation platform combining business process management and robotic process automation |
US11755760B2 (en) | 2019-10-18 | 2023-09-12 | Asg Technologies Group, Inc. | Systems and methods for secure policies-based information governance |
US11693982B2 (en) | 2019-10-18 | 2023-07-04 | Asg Technologies Group, Inc. | Systems for secure enterprise-wide fine-grained role-based access control of organizational assets |
US11269660B2 (en) | 2019-10-18 | 2022-03-08 | Asg Technologies Group, Inc. | Methods and systems for integrated development environment editor support with a single code base |
US11941137B2 (en) | 2019-10-18 | 2024-03-26 | Asg Technologies Group, Inc. | Use of multi-faceted trust scores for decision making, action triggering, and data analysis and interpretation |
US11055067B2 (en) | 2019-10-18 | 2021-07-06 | Asg Technologies Group, Inc. | Unified digital automation platform |
CN111600779A (en) * | 2020-06-24 | 2020-08-28 | 厦门长江电子科技有限公司 | Test platform compatible with various switches |
US11849330B2 (en) | 2020-10-13 | 2023-12-19 | Asg Technologies Group, Inc. | Geolocation-based policy rules |
Also Published As
Publication number | Publication date |
---|---|
US20040042547A1 (en) | 2004-03-04 |
US7684483B2 (en) | 2010-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7684483B2 (en) | Method and apparatus for digitizing and compressing remote video signals | |
US7606314B2 (en) | Method and apparatus for caching, compressing and transmitting video signals | |
US8176155B2 (en) | Remote network management system | |
US8683024B2 (en) | System for video digitization and image correction for use with a computer management system | |
US7738553B2 (en) | Video compression system | |
US7986844B2 (en) | Optimized video compression using hashing function | |
US20030058248A1 (en) | System and method for communicating graphics over a network | |
US20150123902A1 (en) | Method and apparatus for synchronizing virtual and physical mouse pointers on remote kvm systems | |
US20120079522A1 (en) | Method And Apparatus For Transmitting Video Signals | |
US20040215742A1 (en) | Image perfection for virtual presence architecture (VPA) | |
Wang et al. | Research the Compression and Transmission Technology of Medical Image Base on the Remote Consultation |
Legal Events
AS | Assignment
Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR
Free format text: AMENDMENT NO. 1 TO PATENT SECURITY AGREEMENT; ASSIGNORS: RARITAN AMERICAS, INC.; RARITAN, INC.; RIIP, INC.; AND OTHERS; REEL/FRAME: 028192/0318
Effective date: 20120430

AS | Assignment
Owner name: RARITAN AMERICAS, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY; ASSIGNOR: WELLS FARGO BANK, NATIONAL ASSOCIATION; REEL/FRAME: 028924/0272
Effective date: 20120907
Owner name: PNC BANK, NATIONAL ASSOCIATION, PENNSYLVANIA
Free format text: SECURITY AGREEMENT; ASSIGNORS: RARITAN, INC.; RARITAN AMERICAS, INC.; RARITAN TECHNOLOGIES, INC.; AND OTHERS; REEL/FRAME: 028924/0527
Effective date: 20120907
Owner name: RARITAN, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY; ASSIGNOR: WELLS FARGO BANK, NATIONAL ASSOCIATION; REEL/FRAME: 028924/0272
Effective date: 20120907
Owner name: RIIP, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY; ASSIGNOR: WELLS FARGO BANK, NATIONAL ASSOCIATION; REEL/FRAME: 028924/0272
Effective date: 20120907

STCB | Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS | Assignment
Owner name: RIIP, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY; ASSIGNOR: PNC BANK NATIONAL ASSOCIATION; REEL/FRAME: 036819/0205
Effective date: 20151008
Owner name: RARITAN TECHNOLOGIES, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY; ASSIGNOR: PNC BANK NATIONAL ASSOCIATION; REEL/FRAME: 036819/0205
Effective date: 20151008
Owner name: RARITAN AMERICAS, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY; ASSIGNOR: PNC BANK NATIONAL ASSOCIATION; REEL/FRAME: 036819/0205
Effective date: 20151008
Owner name: RARITAN INC, NEW JERSEY
Free format text: RELEASE BY SECURED PARTY; ASSIGNOR: PNC BANK NATIONAL ASSOCIATION; REEL/FRAME: 036819/0205
Effective date: 20151008