2. Introduction
After a successful installation, you are ready to datamosh your videos. If you are an absolute beginner to datamoshing, I recommend reading this introduction first.
Datamoshing is the process of manipulating the data of media files to achieve visual or auditory effects when the file is decoded. Especially popular in the glitch art community, the concept has been around for a long time.
It is the act of removing and/or replacing I-frames and/or P-frames in a compressed video datastream, causing the decoded image and motion-vector data to distort in unpredictable, glitched ways and produce trippy visuals.
The same effect can also appear unintentionally as corruption in media files when some of the data or signal is lost.
Datamoshing became a quite popular glitch effect once people started experimenting with intentional corruption techniques. The effect is not easy to achieve manually, because video files are complicated. People generally use Avidemux, a video editor that was never meant for datamoshing; the trick relied on a bug in a specific, now outdated version. It is time for some real datamoshing software.
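To give a feel for what such a tool does under the hood, here is a rough, illustrative Python sketch of the classic trick: drop every I-frame except the first, so that later P-frames apply their motion data to the wrong picture. This is only a simplified example, not this project's actual code, and it assumes the clip has already been re-encoded as MPEG-4 Part 2 (Xvid-style) video in an AVI container; the file names are placeholders.

```python
# Rough sketch only: drop every I-frame except the first from an Xvid-style
# (MPEG-4 Part 2) AVI. In an AVI, each compressed video frame lives in a
# chunk tagged '00dc'; in the MPEG-4 Part 2 bitstream, each coded frame
# starts with the start code 00 00 01 B6, and the two bits that follow give
# the frame type (00 = I-frame, 01 = P-frame, 10 = B-frame).

CHUNK_TAG = b"00dc"               # AVI tag for a compressed video frame chunk
VOP_START = b"\x00\x00\x01\xb6"   # MPEG-4 Part 2 frame (VOP) start code

def is_iframe(chunk: bytes) -> bool:
    """Return True if this video chunk holds an intra-coded (I) frame."""
    pos = chunk.find(VOP_START)
    if pos == -1 or pos + 4 >= len(chunk):
        return False
    return chunk[pos + 4] >> 6 == 0   # top two bits 00 -> I-frame

def remove_iframes(in_path: str, out_path: str) -> None:
    data = open(in_path, "rb").read()
    pieces = data.split(CHUNK_TAG)    # pieces[0] is everything before the first video chunk
    kept = [pieces[0]]
    have_first_iframe = False
    for piece in pieces[1:]:
        if is_iframe(piece):
            if have_first_iframe:
                continue              # drop this I-frame: the glitch happens here
            have_first_iframe = True  # keep one I-frame so playback has a starting picture
        kept.append(piece)
    open(out_path, "wb").write(CHUNK_TAG.join(kept))

remove_iframes("input.avi", "moshed.avi")
```

Rejoining the raw bytes like this leaves the AVI index out of date, but forgiving players such as VLC or ffplay will usually still decode the result, glitches and all. The terms used above (I-frame, P-frame and so on) are explained next.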
Before proceeding, it helps to know a few basic terms related to video files.
An I-frame (intra frame) is a key frame: a complete preservation of the picture, so only this frame's data is needed to decode it. Simply put, it is a compressed frame that does not depend on the contents of any earlier decoded frame.
In interframe video coding, a delta frame is a "difference" frame that stores an incremental change from the previous frame. Delta frames can be P-frames or B-frames.
A P-frame ('predicted frame') holds only the changes in the image from the previous frame. It allows macroblocks to be compressed using temporal prediction in addition to spatial prediction, with motion estimation based on previously encoded frames. P-frames follow I-frames and contain only the data that has changed since the preceding frames (such as color or content changes).
The I-frame interval configures the number of partial frames (P-frames) that occur between full frames (I-frames) in the video stream.
For example, in a scene where a door opens and a person walks through, only the movements of the door and the person are stored by the video encoder. The stationary background that occurs in the previous partial frames is not encoded because no changes occurred in that part of the scene. The stationary background is only encoded in the full frames. Partial frames improve video compression rates by reducing the size of the video.
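When preparing footage for datamoshing, it is common to re-encode it with a very long I-frame interval so that almost everything becomes a P-frame. As a hedged example (FFmpeg must be installed, and the file names and values below are only illustrative), this can be done from Python by calling ffmpeg:

```python
import subprocess

# Illustrative only: re-encode a clip as MPEG-4 Part 2 video in an AVI with a
# very large keyframe interval (-g) and no B-frames (-bf 0), leaving a long
# run of P-frames to play with. File names and values are placeholders.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "mpeg4",   # FFmpeg's built-in MPEG-4 Part 2 encoder
    "-q:v", "3",       # quality-based encoding (lower number = higher quality)
    "-g", "9999",      # at most one I-frame every 9999 frames
    "-bf", "0",        # no B-frames, so every delta frame is a P-frame
    "-an",             # drop audio to keep the example simple
    "output.avi",
], check=True)
```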
A B‑frame ('Bi-predictive picture') is a frame that can refer to frames that occur both before and after it. It saves even more space by using differences between the current frame and both the preceding and following frames to specify its content. However, B-frames are resource-heavy.
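To see these frame types in a real file, you can ask ffprobe (part of the FFmpeg suite) for each frame's picture type. A small sketch, with the input file name as a placeholder:

```python
import subprocess

# Illustrative sketch: print the picture type (I, P or B) of every frame in
# the first video stream, using ffprobe. The input file name is a placeholder.
result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",                     # first video stream only
        "-show_entries", "frame=pict_type",           # one 'pict_type' value per frame
        "-of", "default=noprint_wrappers=1:nokey=1",  # plain output, one value per line
        "input.mp4",
    ],
    capture_output=True, text=True, check=True,
)
types = result.stdout.split()
print(types[:30])                                # e.g. ['I', 'P', 'P', 'B', ...]
print({t: types.count(t) for t in set(types)})   # how many frames of each type
```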
The term 'codec' is short for 'coder-decoder'. In relation to video, a codec encodes (compresses) video for storage or live transmission, then decodes (decompresses) the video for viewing by reversing the encode process. A codec processes raw digital video and stores it as a stream of bytes.
A video container is also known as a video wrapper or video format. Containers are often referred to by their file extension, such as '.mov' or '.mp4'. They bundle the several elements that make up a video file, including the encoded video data, the audio data, metadata and other ancillary data such as closed captions.
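The codec/container split is easy to see with FFmpeg's stream copy mode: the sketch below (file names are placeholders) swaps the container from MP4 to AVI with '-c copy', so the encoded video and audio data pass through untouched and nothing is re-encoded.

```python
import subprocess

# Illustrative only: change the container (MP4 -> AVI) without touching the
# codec data. "-c copy" copies the encoded streams as-is, so no re-encoding
# happens. File names are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-c", "copy", "output.avi"],
    check=True,
)
```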
This should be enough for a basic understanding; you can research these topics further on your own.