Video Editing Training: Essential Skills
Successfully learning video editing requires understanding some technical components of digital video. Without this understanding, you'll merely be pushing buttons and clicking checkboxes. Before starting a video editing course, it is useful to take a few minutes to build the foundational knowledge regarding digital video.
When working in video editing, whether creating your first project as part of a video editing class or working as a long-time professional, you will want to consider the final destination for your project. Will it be used on television, on the web, or on a mobile device? Knowing this information allows you to accurately create your video content to match its intended destination. Projects for high-definition broadcast television differ from those for a portable device with a small screen, or from those produced solely for display on a computer screen. Each of these media has its own standards for items such as frame rate, aspect ratio, and bit rate. Understanding these items saves you time and effort in the production process.
Understanding video formats used in video editing classes
Some video formats are common for professional video production, while others are suitable only for broadband or small-screen purposes. Because video editing tools are used to create content that is shared across different formats, it is useful to understand these formats, and we discuss the most common ones within the video editing classes at AGI. There are two main standards used for worldwide broadcast television, a handful of competing standards for desktop and web video, and a series of device-specific standards used in mobile handheld devices. Technical standards, such as the ones touched upon here, are very complex. In general, regardless of the platform for which you are creating video content, there are three main properties to keep in mind:
Dimensions: This property specifies the pixel dimensions of a video file that you will work with when performing video editing and eventually export. The pixel dimensions are the number of pixels horizontally and vertically that make up an image or video frame. This value is usually written as a pair of numbers separated by an x, where the first number is the horizontal value and the second represents the vertical value, such as 720 × 480. The term pixel is a combination of the words picture and element; a pixel is the smallest individual component in a digital image. Whether you are dealing with a still image or working with video frames makes no difference; everything displayed on-screen is made up of pixels. The dimensions of a video or still image file determine its aspect ratio; that is, the proportion of an image's horizontal units to its vertical ones. An aspect ratio is usually written as horizontal units:vertical units, and the two most common aspect ratios seen in current video displays are 4:3 and 16:9.
Frame rate: This property specifies the number of individual images that make up each second of video. Frame rate is measured in fps, an acronym for frames per second.
Pixel aspect ratio: This property specifies the shape of the pixels that make up an image. Pixels are the smallest part of a digital image, and different display devices, such as televisions and computer monitors, have pixels with different horizontal and vertical proportions. The short sketch after this list shows how frame dimensions and pixel aspect ratio combine to produce the shape you actually see on screen.
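To make these properties concrete, here is a minimal Python sketch, using illustrative frame sizes and an approximate pixel aspect ratio value rather than figures from any particular preset, that reduces a frame's pixel dimensions to an aspect ratio and shows how a non-square pixel aspect ratio changes the shape the viewer actually sees.

```python
from math import gcd

def storage_aspect_ratio(width, height):
    """Reduce pixel dimensions to the simplest horizontal:vertical ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

def display_aspect_ratio(width, height, par=1.0):
    """Aspect ratio the viewer sees once pixel shape (PAR) is factored in."""
    return (width * par) / height

# Square pixels (typical of computer displays): dimensions tell the whole story.
print(storage_aspect_ratio(1920, 1080))           # 16:9
print(round(display_aspect_ratio(1920, 1080), 3)) # 1.778, i.e., 16:9

# Non-square pixels: a 720 x 480 frame is stored as 3:2, but a pixel aspect
# ratio below 1.0 (0.9 is an approximate, illustrative value) narrows each
# pixel so the picture displays at roughly 4:3.
print(storage_aspect_ratio(720, 480))                    # 3:2
print(round(display_aspect_ratio(720, 480, par=0.9), 3)) # 1.35, roughly 4:3
```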
When using a video editing program to produce graphics for broadcast television, you have to conform to a specific set of formats and standards. For example, you need to know whether your graphics will be displayed on high-definition screens (1080i, 1080p, 720p), standard-definition screens, or mobile devices, because this affects the size at which you must create your graphics. Similarly, you need to know whether you're in a region that broadcasts using the ATSC (often still called NTSC) or PAL standards, since this affects both the size at which you can create your graphics and the frame rate and pixel aspect ratio you will need to use. If you are producing animation or video for the Web, you'll need to know the format that the distributing site will use: Flash, Silverlight, H.264, or another format, since certain video effects don't work well when exported to certain formats.
ATSC
In the United States, the ATSC, or Advanced Television Systems Committee, has issued a set of standards for the transmission of digital television. These standards have replaced the older, analog NTSC (National Television System Committee) formats. The standards embraced by the ATSC include standard-definition and high-definition display resolutions, aspect ratios, and frame rates. All broadcast video and graphics must conform to one of the ATSC standards. Information on the various ATSC standards is available on their website at ATSC.org.
High-definition television
While high-definition (HD) television technology has existed for decades, it wasn't until the beginning of the 21st century that it came to the attention of the average American television viewer. You can create HD content in all major video editing tools and share it with companion apps such as Premiere Pro. The term HD describes video that has a higher resolution than traditional television systems, which are called SD, or standard definition. There are several high-definition standards for broadcast television, including 720p, 1080i, and 4K, with some televisions, gaming consoles (PlayStation 3, Xbox 360, and more), and Blu-ray disc players supporting 1080p. The letters p and i refer to whether the format uses a progressive or an interlaced display method. Interlacing divides each frame of video into two separate fields; when combined, these two fields form a single video frame that shows a single image. Progressive display forgoes fields and treats each individual frame as its own unique image. In general, progressive displays are clearer and better defined, while interlaced displays require less broadcast bandwidth to transmit to the viewer. Most modern video cameras allow the user to choose whether to record in a progressive or an interlaced format. The differences between HD and standard-definition video editing workflows are covered within video editing classes.
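As a rough illustration of the interlacing concept described above, the following sketch uses NumPy and a randomly generated stand-in for a real video frame to split one 1080-line frame into its two fields by separating the even and odd scan lines.

```python
import numpy as np

# A randomly generated stand-in for one 1080-line HD frame: height x width x RGB.
frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)

# Interlacing divides the frame into two fields: one holds the even-numbered
# scan lines, the other holds the odd-numbered lines.
upper_field = frame[0::2]   # lines 0, 2, 4, ...
lower_field = frame[1::2]   # lines 1, 3, 5, ...
print(upper_field.shape, lower_field.shape)   # (540, 1920, 3) (540, 1920, 3)

# A progressive frame skips fields entirely; interleaving the two fields
# back together reconstructs the original full frame.
recombined = np.empty_like(frame)
recombined[0::2] = upper_field
recombined[1::2] = lower_field
assert np.array_equal(recombined, frame)
```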
Standard-definition television
Contrary to some beliefs, standard-definition footage is still in use today. Simply compare the number of cable television channels that are available in high definition to those that are not. Prior to the invention of high definition, there was only one broadcast standard in the United States, NTSC (National Television System Committee), which included settings for the display of video in both 4:3 and 16:9 aspect ratios. While technically it has been replaced by the ATSC standards, the term NTSC is still used by most video cameras, as well as many editing and graphics applications, when referring to standard-definition, broadcast-quality video.
NTSC and NTSC widescreen: Graphics applications designed to produce content for broadcast, such as Premiere Pro, After Effects, and Final Cut Pro, include pre-built settings for creating video projects, called presets, that correspond with the most commonly used broadcast standards. The NTSC presets include settings for both a standard (4:3) and a widescreen (16:9) aspect ratio. They use the same dimensions, 720 × 480, but different pixel aspect ratios, and this is what accounts for the difference in shape. Devices that comply with the NTSC standard use a frame rate of 29.97 frames per second.
PAL
PAL, or Phase Alternating Line, is the standard for broadcast television used throughout Europe and much of the rest of the world outside of North America. PAL differs from NTSC in several key ways, including dimensions and frame rate. It uses a frame rate of 25 fps, which is closer to the 24 fps used in film, and like NTSC, it has both a standard and widescreen setting.
PAL and PAL widescreen: In applications such as Premiere Pro, After Effects, and Final Cut, the PAL presets include both a standard (4:3) and a widescreen (16:9) aspect ratio. Much like their NTSC equivalents, they use the same pixel dimensions, in this case 720 × 576, but each has a different pixel aspect ratio.
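The arithmetic behind these NTSC and PAL presets can be sketched in a few lines of Python. The pixel aspect ratio values below are approximate, commonly cited figures rather than values taken from any specific application, and the results land close to, though not exactly on, 4:3 and 16:9 because the 720-pixel-wide frame includes a few pixels beyond the visible picture area.

```python
# Approximate, commonly cited preset values; exact pixel aspect ratios vary
# slightly between applications and standard revisions.
presets = {
    "NTSC":            {"width": 720, "height": 480, "par": 0.9091, "fps": 29.97},
    "NTSC widescreen": {"width": 720, "height": 480, "par": 1.2121, "fps": 29.97},
    "PAL":             {"width": 720, "height": 576, "par": 1.0940, "fps": 25},
    "PAL widescreen":  {"width": 720, "height": 576, "par": 1.4587, "fps": 25},
}

for name, p in presets.items():
    display_ratio = p["width"] * p["par"] / p["height"]
    print(f'{name}: {p["width"]}x{p["height"]} at {p["fps"]} fps '
          f'displays at roughly {display_ratio:.2f}:1')

# 4:3 is about 1.33:1 and 16:9 is about 1.78:1, so the standard presets land
# near 4:3 and the widescreen presets near 16:9 despite identical dimensions.
```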
Video editing for web and mobile devices
Although there is no single standard for video on the Web or on mobile devices, there are only a handful of competing audio/video formats, and After Effects, Premiere Pro, or Final Cut Pro can be used to create and share content using these different formats. Creating video for Internet distribution is a part of the video editing courses offered at AGI. QuickTime, Windows Media Video, and H.264 are the main video formats. The QuickTime format is controlled by Apple Inc., and for years was the de facto standard for web-delivered video. The freely available QuickTime Player is compatible with both Windows and Mac OS and is used to view QuickTime Movie (.MOV) and other video file formats. QuickTime format video is also supported on some mobile devices, most notably Apple's iPhone, iPod, and iPad products.
Windows Media Video, often called WMV, is the standard developed by Microsoft, the makers of the Windows operating system. A variation of WMV is used for Silverlight video, which is widely used by many professional media organizations, including NBC Sports for their live Olympics coverage and Netflix for streaming videos. Windows Media is also a supported format on some multimedia players and mobile devices, such as Windows phones.
Even outside of web video distribution, H.264 is a standard for high-definition video compression on a variety of platforms and devices. The format, also known as AVC, is derived from the MPEG-4 standard (MPEG-4 Part 10), with its patent pool licensed by MPEG LA; Ogg Theora, maintained by the Xiph.Org Foundation, and Google's WebM format are open-source alternatives. Mobile devices such as the Apple iPod, Sony PlayStation Vita, and Windows and Android tablets support variations of H.264, as do some HTML5-compliant browsers, many mobile phones, and third-party video playback applications such as QuickTime Player, Flash Player, and the VLC media player. As with all technology, the mobile video market is constantly changing and developing. As these standards evolve and grow, browser and device support will fluctuate.
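As one example of how content is commonly prepared for web and mobile delivery, the sketch below calls the free ffmpeg command-line tool from Python, assuming ffmpeg is installed and using hypothetical input and output file names, to compress a source movie to H.264 video with AAC audio in an MP4 container.

```python
import subprocess

# Hypothetical file names; substitute your own source and destination.
source = "master_edit.mov"
destination = "web_delivery.mp4"

subprocess.run(
    [
        "ffmpeg",
        "-i", source,               # input movie exported from the editor
        "-c:v", "libx264",          # compress the video stream with H.264
        "-crf", "23",               # constant-quality setting; lower means higher quality
        "-preset", "medium",        # encoder speed/efficiency trade-off
        "-c:a", "aac",              # AAC audio, widely supported on mobile devices
        "-b:a", "128k",             # audio bit rate
        "-movflags", "+faststart",  # lets playback start before the file fully downloads
        destination,
    ],
    check=True,
)
```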
Video editing for film
In addition to their use in producing television and web video, video editing classes cover how Premiere Pro, Final Cut, and After Effects also contain presets intended for film post-production. These applications can import and output digital video at both 2K and 4K resolutions. 4K is the term used to describe video that has a resolution above 4000 horizontal pixels; this is more than double the horizontal size of 1080p high-definition footage. This means you can produce visual effects and graphics that are on par with high-quality film productions.
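To put the "more than double" comparison in perspective, here is a quick calculation using the common DCI frame sizes of 2048 × 1080 for 2K and 4096 × 2160 for 4K; the exact dimensions used in a given project can vary.

```python
# Common DCI frame sizes; actual project dimensions can vary.
frames = {
    "1080p HD": (1920, 1080),
    "2K":       (2048, 1080),
    "4K":       (4096, 2160),
}

hd_width, hd_height = frames["1080p HD"]
for name, (width, height) in frames.items():
    width_factor = width / hd_width
    pixel_factor = (width * height) / (hd_width * hd_height)
    print(f"{name}: {width} x {height} "
          f"({width_factor:.2f}x the width, {pixel_factor:.2f}x the pixels of 1080p)")

# 4K works out to a little more than twice the width of 1080p
# and more than four times the total pixel count.
```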
Understanding frame rate and resolution for video editing
Video is essentially a series of individual still images that are displayed very quickly, one after the other. The frame rate of video is measured by the number of frames recorded or played back each second, and it is denoted as fps, an acronym for frames per second. Different video standards have different frame rates, and many video standards support a variety of frame rates. As a comparison, American television is broadcast at approximately 30 fps (29.97 fps), PAL uses 25 fps, and film uses a frame rate of 24 fps. The concepts of frame rate and resolution are covered as part of the video editing courses.
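Because frame rate is simply frames per second, the total number of frames in a clip is its duration multiplied by its frame rate. The short sketch below compares a hypothetical ten-second clip at the three rates mentioned above.

```python
def total_frames(duration_seconds, fps):
    """Frames needed to cover a clip of the given duration at a given frame rate."""
    return duration_seconds * fps

clip_length = 10  # seconds; an arbitrary example duration
for fps in (24, 25, 29.97):
    print(f"{fps} fps x {clip_length} s = {total_frames(clip_length, fps):g} frames")
# 24 fps -> 240 frames, 25 fps -> 250 frames, 29.97 fps -> 299.7 frames
```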
If you have a background in graphic design, you might be familiar with the term resolution, which refers to pixel density, or the number of pixels in a given space. In North America, this kind of resolution is denoted in pixels per inch, or ppi. For example, images created for printing in high-quality magazines are usually 300 ppi, while images created for use on a web site usually have a resolution of 72 ppi. When working with video, ppi is not used to describe resolution. When discussing video, the term resolution refers to the pixel dimensions of an image: the number of horizontal and vertical pixels that make up the actual image. When creating graphics for video, these pixel dimensions determine the size of your content relative to the overall video frame.
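The relationship between pixel dimensions and print resolution is simple division: the physical size in inches equals the pixel dimensions divided by the ppi. The sketch below uses an arbitrary 3000 × 2400 pixel image to show why ppi matters for print but not for video, where only the pixel dimensions themselves count.

```python
def print_size(pixels_wide, pixels_high, ppi):
    """Physical print size, in inches, of an image at a given pixel density."""
    return pixels_wide / ppi, pixels_high / ppi

width_px, height_px = 3000, 2400  # an arbitrary example image

for ppi in (300, 72):
    w_in, h_in = print_size(width_px, height_px, ppi)
    print(f"At {ppi} ppi the image prints at {w_in:.1f} x {h_in:.1f} inches")

# For video, "resolution" is answered purely by pixel dimensions, such as a
# 1920 x 1080 frame, regardless of the physical size of the screen showing it.
```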
Graphics that are used in video are created using the RGB color mode. Each individual pixel is assigned a unique color value consisting of combinations of red, green, and blue. Each of these colors is saved to its own color channel, and when the colors are combined, the composite (a full-color image) is created.

In addition to the color channels of an image, some formats can also contain an additional channel that holds information about the areas of an image that are transparent. This channel is called the alpha channel. If you also work in Photoshop, you might already be familiar with alpha channels, although the meaning of an alpha channel in video is somewhat different. In Photoshop, any saved selection is called an alpha channel, and you can have up to 99 alpha channels. In video editing classes, participants learn that in After Effects, Premiere Pro, and Final Cut, as in all video editing applications, the term alpha channel refers specifically to the transparency of a still image or video file. Alpha channels use 256 shades of gray to represent transparency. When looking at an alpha channel in most applications, black pixels represent those that are fully transparent, white pixels are fully opaque, and gray pixels represent semi-transparent areas. Only some image and video formats support saving alpha channels along with the other image information. Commonly used file formats that can include alpha channels are: Tagged Image File Format (.tiff), TARGA (Truevision Advanced Raster Graphics Adapter, .tga), PNG (Portable Network Graphic), QuickTime (.mov), and AVI (Audio Video Interleave). Alpha channels are automatically created for the transparent areas of native Photoshop and Illustrator files when they are imported into an After Effects project.
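To illustrate how an alpha channel's 256 gray levels control transparency, here is a minimal NumPy sketch in which a solid-color foreground and background stand in for real footage: black (0) in the alpha channel leaves a pixel fully transparent, white (255) makes it fully opaque, and values in between blend the two layers.

```python
import numpy as np

height, width = 4, 4  # a tiny example frame

# Invented stand-ins: a solid red foreground over a solid blue background.
foreground = np.zeros((height, width, 3))
foreground[..., 0] = 255
background = np.zeros((height, width, 3))
background[..., 2] = 255

# Alpha channel: 0 = fully transparent, 255 = fully opaque, in between = blended.
alpha = np.zeros((height, width))
alpha[:, 1] = 128
alpha[:, 2:] = 255

# Normalize alpha to the 0.0-1.0 range and blend the two layers per pixel.
a = (alpha / 255.0)[..., np.newaxis]
composite = foreground * a + background * (1.0 - a)

print(composite[0, 0])  # [  0.   0. 255.] -> pure background (transparent)
print(composite[0, 1])  # [128.   0. 127.] -> roughly half-and-half blend
print(composite[0, 3])  # [255.   0.   0.] -> pure foreground (opaque)
```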