Technique aims to protect, possibly improve Internet video
WEST LAFAYETTE, Ind. – Researchers at Purdue University are close to perfecting a technique that will make it practical to use "digital watermarking" for video sent over the Internet, providing a reliable way to protect copyrights and to verify a video's authenticity.
Digital watermarking, a form of steganography, is a procedure in which hidden patterns are embedded into an image or document on the World Wide Web. The patterns can then be used to verify that the image is authentic, protecting the intellectual property rights of people who create digital media.
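As a toy illustration of the general idea (not the Purdue technique, whose details the release does not disclose), a hidden bit pattern can be written into the least-significant bits of an image's pixel values and read back later to verify authenticity:

```python
# Toy least-significant-bit (LSB) watermark: embed a hidden bit pattern
# into pixel values, then extract it to verify the image. Illustrative
# only -- real watermarking schemes are far more robust than this.

def embed_watermark(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with watermark bits."""
    marked = list(pixels)
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & ~1) | b   # clear the LSB, set it to the bit
    return marked

def extract_watermark(pixels, n):
    """Read back the LSBs of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

image = [200, 37, 149, 88, 251, 16, 73, 140]   # hypothetical 8-pixel strip
mark  = [1, 0, 1, 1, 0, 1, 0, 0]               # hidden verification pattern

marked = embed_watermark(image, mark)
assert extract_watermark(marked, len(mark)) == mark   # authenticity check
```

Each pixel changes by at most one intensity level, so the pattern is invisible to a viewer but recoverable by anyone who knows where to look.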
However, watermarking is especially difficult to use for Internet video. As video is transmitted over the "noisy," traffic-congested Internet, a number of frames never make it to the receiving end. The frames, including those that may contain digital watermarks, are lost, said Edward Delp, a professor of electrical and computer engineering at Purdue.
The Purdue research concerns a technique that "resynchronizes" video at the receiving end of the transmission, compensating for lost frames so that the hidden watermark can still be recovered.
The findings were detailed in a paper presented during an international conference called Security and Watermarking of Multimedia Contents IV, in San Jose, Calif. The paper was written by Delp, Purdue doctoral student Eugene T. Lin and Christine Podilchuk, a research engineer at Lucent Technologies/Bell Laboratories.
The technique aims to benefit anyone who wants to stream video content over the Internet, from broadcasters to law enforcement officials.
"Perhaps a surveillance camera is looking at a scene," Delp said. "Instead of sending it through some sort of broadcast medium, it could be sent over the Internet."
The resynchronization technique also might have applications in deciphering hidden watermark messages incorporated into Internet video for illicit purposes, including communications among terrorists. Digital media can be screened for hidden terrorist messages by using "steganalysis," which will be the subject of a special session during the conference, Delp said.
Because digital video will become more common over the next five years, techniques will soon be needed to maintain the quality of video as it is transmitted over the Web.
"The network is overloaded, and you get dropouts, or missing data," Delp said. "You can get enough errors to make the video almost unwatchable."
Standard "error correction" techniques used to recover lost data in e-mail transmissions do not work for real-time video and audio transmissions.
The researchers have created a computer algorithm (a series of steps that enables a computer to complete a task) that promises to solve the problem.
"We have developed a mechanism under which, at the receiving end, you will be able to correct the errors that occurred in the Internet transmission and be able to recover the hidden messages in video," Delp said.
Video relies on critical split-second timing for individual frames that follow one another. The precise, rapid succession of 30 frames per second creates the illusion of continuous movement.
As video is transmitted over the Internet, however, it is difficult to maintain this delicate timing.
The new technique resynchronizes the transmission at the receiving end, fixing errors that throw off the timing and result in lost information.
The very nature of video makes watermarking difficult.
"You are presented with 30 frames per second, and your eye integrates those to make it look like continuous motion," Delp said. "But your eye is real susceptible to any subtle changes in the images from one frame to the next.
"If you don't properly put the watermark into the video, you can see the watermarked image, which is not supposed to be visible."
To properly embed watermarks into video, the Purdue researchers use a computer program that mimics how people see video.
"We exploit a human visual system model that tells us what you can see and what you can't see when it's moving," Delp said.
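A greatly simplified stand-in for such a human visual system model (illustrative only; the names, formula, and constants here are assumptions, not the researchers' model) scales the watermark's strength by local motion, since rapid change between frames masks small distortions while static regions do not:

```python
# Simplified stand-in for an HVS model: embed the watermark more strongly
# where the frame is changing (motion masks distortion) and weakly where
# it is static. Constants and formula are illustrative assumptions.

def perceptual_strength(prev_frame, cur_frame, base=1.0, gain=0.05, cap=4.0):
    """Per-pixel embedding strength from a crude frame-difference motion measure."""
    return [min(cap, base + gain * abs(c - p))
            for p, c in zip(prev_frame, cur_frame)]

def embed(cur_frame, watermark_signal, strength):
    """Add the +1/-1 watermark signal, scaled by the perceptual strength."""
    return [max(0, min(255, round(c + s * w)))
            for c, s, w in zip(cur_frame, strength, watermark_signal)]

prev = [100, 100, 100, 100]
cur  = [100, 100, 180, 180]      # motion in the right half of this tiny "frame"
wm   = [1, -1, 1, -1]            # spread-spectrum-style watermark chips

s = perceptual_strength(prev, cur)
marked = embed(cur, wm, s)
# static pixels get only a +/-1 tweak; moving pixels tolerate a larger one
```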
The technique might be used to improve the quality of Internet video. "But the research is not focused in this direction yet," he said.
The Purdue researchers have been working on digital watermarking for more than five years, and they began working on developing techniques for video about two years ago.
"We are showing that our technique is probably better than any of the other current techniques for watermarking resynchronization," Delp said. "But to solve the entire problem to my satisfaction is going to take almost two more years."
Writer: Emil Venere, (765) 494-4709, email@example.com
Source: Edward Delp, (765) 494-1740, firstname.lastname@example.org
Purdue News Service: (765) 494-2096; email@example.com
NOTE TO JOURNALISTS: A copy of the research paper referred to in this news release is available from Emil Venere, (765) 494-4709, firstname.lastname@example.org.
Temporal synchronization in video watermarking
E.T. Lin, E.J. Delp III, Purdue University; C.I. Podilchuk, Lucent Technologies/Bell Labs
Synchronization is a significant issue for the reliable detection of many video watermarks. In addition to initial synchronization, which is performed when the detector begins analyzing a video signal, resynchronization may be necessary if the video is corrupted by errors, such as by being transmitted over an error-prone network. The watermarked video may also be attacked by synchronization attacks, such as rescaling, cropping and temporal resampling. Achieving temporal and spatial synchronization without computationally expensive search (such as a sliding correlator) can be a challenge.
One method for fast synchronization is the embedding of templates. However, templates are designed to be easily detected, and hence, they are vulnerable to being removed. Templates can also affect the perceptual transparency of the watermarked video. In this paper, we examine a method for temporal synchronization that does not require the embedding of templates. This method analyzes the watermarked video and extracts features that are unlikely to change in value unless significant alteration to the watermarked video occurs. The features are then used as inputs to a state machine that generates the information necessary to achieve and maintain temporal synchronization.
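A greatly simplified sketch of the feature-driven idea (all names and formulas here are illustrative assumptions, not the paper's actual algorithm): if each frame's watermark key is generated from a robust feature of that frame's own content, the detector can regenerate the key from whatever frames it receives and stays synchronized even after frames are dropped, with no exhaustive search over temporal offsets:

```python
# Illustrative sketch: derive each frame's watermark key from a robust
# content feature, standing in for the paper's feature-driven state
# machine. Frame loss then cannot desynchronize detection, because the
# key for every surviving frame is recomputed from that frame itself.

def frame_feature(frame):
    """Coarse feature unlikely to change unless the frame is heavily
    altered: the quantized average pixel intensity (0..7)."""
    return (sum(frame) // len(frame)) // 32

def frame_key(feature, secret=0xA5):
    """Per-frame watermark key generated from the feature and a shared secret."""
    return (feature * 31 + secret) % 256

video = [[40 * i + j for j in range(8)] for i in range(6)]   # toy 6-frame clip

embed_keys = [frame_key(frame_feature(f)) for f in video]

# Simulate a lossy network: frames 2 and 4 never arrive.
received = [f for i, f in enumerate(video) if i not in (2, 4)]

detect_keys = [frame_key(frame_feature(f)) for f in received]

# The detector's keys still match the embedder's keys for every frame
# that survived -- synchronization is maintained without a sliding search.
surviving = [k for i, k in enumerate(embed_keys) if i not in (2, 4)]
assert detect_keys == surviving
```

The paper's actual scheme uses a state machine over such features rather than a purely per-frame mapping, but the sketch shows why content-derived keys avoid both embedded templates and costly correlator searches.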