
Premiere Pro tutorial: Understanding Digital Video in Premiere Pro

What you’ll learn in this Premiere Pro Tutorial:

  • Understanding video settings
  • Premiere Pro presets

This tutorial provides you with a foundation for working with video in Adobe Premiere Pro. It is the first lesson in the Adobe Premiere Pro CS5 Digital Classroom book. For more Adobe Premiere Pro training options, visit AGI’s Premiere Pro Classes.


Before you begin editing in Premiere Pro, it is helpful to become familiar with some of the concepts and principles that define the art and craft of video editing.

Starting up

You will not need any files for this lesson.

Understanding digital non-linear editing

Premiere Pro is a digital NLE (non-linear editor) that gives you direct and immediate access to any frame of a digital video clip at any time. In an NLE process, you work with computer data instead of a physical linear medium, such as film or tape, and you can jump back and forth along your timeline at any point in the editing process. Unlike traditional graphic image editing, this is a non-destructive process because the original source footage is never lost or altered: the media files that you import or capture into Premiere Pro are only links that reference the original footage stored on your hard drive.

The video and audio footage that you edit in Premiere Pro can be digitized from an analog source, such as a VHS or cassette tape, or recorded directly to a digital format, as is the case with modern video cameras as well as other hard drive and compact flash-based recording devices.

Understanding video settings

In Premiere Pro, you generally work by building sequences that match the standards of the media you are going to work with rather than the intended output. There are many types of video files you can work with in Premiere Pro. The various formats, aspect ratios, codecs, and other settings used to describe video files will be explained in this book. The following terms will help you:

Dimensions: specifies the pixel dimensions of a video file; in other words, the number of pixels horizontally and vertically that compose an image or video frame. This value is usually written as a pair of numbers separated by an ×, where the first number is the horizontal value and the second represents the vertical; for example, 720 × 480. Pixel is a contraction of the words “picture” and “element” and is the smallest individual component in a digital image.

Frame rate: specifies the number of individual images that compose each second of video. Frame rate is displayed as a value of fps (frames per second).

Pixel Aspect Ratio: specifies the shape of the pixels that compose an image. Pixels are the smallest part of a digital image, and different display devices (televisions, computer monitors, etc.) have pixels with different horizontal and vertical proportions. The short sketch after these definitions shows how dimensions, frame rate, and pixel aspect ratio fit together.
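
These three values work together: multiplying a frame’s pixel width by its pixel aspect ratio and dividing by its height gives the display aspect ratio the viewer actually sees, while the frame rate tells you how many of those frames are shown each second. The minimal Python sketch below illustrates the idea; the frame sizes and the approximate PAR value of 0.9091 are example figures for illustration, not settings read from Premiere Pro.

    # Minimal sketch of how dimensions, pixel aspect ratio (PAR), and frame
    # rate relate. The values below are example figures, not Premiere Pro data.

    def display_aspect(width_px: int, height_px: int, par: float) -> float:
        """Shape of the picture the viewer sees: (width x PAR) / height."""
        return (width_px * par) / height_px

    # A square-pixel HD frame: 1280 x 720 with a PAR of 1.0
    print(f"Pixels per frame: {1280 * 720:,}")                          # 921,600
    print(f"Display aspect:   {display_aspect(1280, 720, 1.0):.2f}")    # 1.78 (16:9)

    # A non-square-pixel SD frame: 720 x 480 with an approximate PAR of 0.9091
    print(f"Pixels per frame: {720 * 480:,}")                           # 345,600
    print(f"Display aspect:   {display_aspect(720, 480, 0.9091):.2f}")  # ~1.36 (near 4:3)

    # Frame rate is simply how many such frames are shown per second,
    # e.g. roughly 30 fps for NTSC-style footage or 25 fps for PAL.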

Editing vs. delivery formats

While working through the lessons in this book you are going to encounter many new concepts and terms, especially when dealing with video footage. Both in these lessons and in your own projects, you will come across many different video formats and the codecs (compressor/decompressor) used to compress them. Most computer users are familiar with the term “format” as it is used to describe files; for example, jpeg and tiff formats for images, or doc and ppt formats for Word and PowerPoint files. For video, however, formats such as QuickTime and AVI are only containers that hold video information, much like a file folder holds paper. You can describe the relationship between formats and codecs this way: formats are the containers; codecs are the language the contents are written in.

Codec is a contraction of the words “compressor” and “decompressor.” Codecs are mathematical algorithms used to shrink audio and video files to manageable sizes. Video files are very large; for example, 20 minutes of NTSC DV video (from a standard definition miniDV camcorder) is over 4 GB in size, roughly the capacity of a single-layer DVD. Without video codecs, you could not easily save and archive video footage; video would also never be small enough to watch online, send by e-mail, or play on a mobile device. To view an audio or video file, you must have a player compatible with the format, and the codec used to compress the file must be installed on your computer so the file can be decompressed.
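
To see where a figure like “over 4 GB for 20 minutes” comes from, the rough arithmetic below assumes DV’s commonly cited data rate of about 3.6 MB per second (a ~25 Mbit/s video stream plus audio and overhead); the rate is an approximation used for illustration, not a number taken from this book.

    # Rough arithmetic behind the "20 minutes of DV is over 4 GB" figure.
    # Assumes DV's ~25 Mbit/s video stream plus audio/overhead: ~3.6 MB/s total.

    DV_DATA_RATE_MB_PER_SEC = 3.6   # approximate total DV data rate
    minutes = 20
    seconds = minutes * 60

    size_mb = seconds * DV_DATA_RATE_MB_PER_SEC
    size_gb = size_mb / 1024

    print(f"{minutes} minutes of DV is roughly {size_mb:,.0f} MB ({size_gb:.1f} GB)")
    # -> 20 minutes of DV is roughly 4,320 MB (4.2 GB), about one single-layer DVD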

Traditionally, the codecs used to capture and edit footage were uncompressed or offered very little compression, while the codecs used to deliver the final files to viewers were far more heavily compressed. With the spread of high-definition video equipment this has changed: many cameras now use MPEG-2 or MPEG-4 codecs to reduce files to manageable sizes for storage and editing.

High definition vs. standard definition

Standard definition footage adheres to the NTSC (National Television System Committee) or PAL (Phase Alternating Line) standards, which are the standards for video used in the United States of America and Europe, respectively. Standard definition footage typically has an aspect ratio of 4:3; in other words, there are four horizontal units for every three vertical units of measure. Prior to the invention of high-definition television, NTSC was the only standard for broadcast video in the United States, and it included settings for both 4:3 and 16:9 aspect ratios. In the age of digital television broadcasts, NTSC has been replaced by the ATSC (Advanced Television Systems Committee) standards; however, the term NTSC is still used by most video cameras and by editing and graphics applications to refer to standard definition broadcast-quality video.

There are other aspect ratios used in video, but 4:3 and 16:9 are the most common.

NTSC & NTSC Widescreen: The NTSC presets include settings for standard (4:3) and widescreen (16:9) aspect ratios. The dimensions for both are 720 × 480, but the pixel aspect ratio is different, which accounts for the difference in display shape. Pixel aspect ratio (PAR) is the ratio, or shape, of the pixels that compose each part of a single video frame. Both versions of the NTSC standard use a frame rate of 29.97 fps, commonly rounded to 30 fps.

PAL & PAL Widescreen: PAL is the standard for broadcast television used throughout Europe and much of the rest of the world. PAL differs from NTSC in several key ways, such as frame dimensions and frame rate. It uses a frame rate of 25 fps, which is closer to the 24 fps used in film and, according to some video professionals, produces more natural-looking motion. Similar to NTSC, PAL has standard (4:3) and widescreen (16:9) settings. A frame size of 720 × 576 is shared by both, and the pixel aspect ratio gives each its unique frame shape.
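
As a rough side-by-side view of the four standard-definition presets described above, the sketch below pairs each preset with commonly cited approximate pixel aspect ratios and derives the resulting display shape; the exact PAR figures vary slightly between applications and versions, so treat them as illustrative rather than as Premiere Pro’s own values.

    # Illustrative comparison of the SD presets. PAR figures are approximate
    # and may differ slightly from the values a given application uses.

    sd_presets = {
        # name: (width, height, frames per second, approx. pixel aspect ratio)
        "NTSC":            (720, 480, 29.97, 0.9091),
        "NTSC Widescreen": (720, 480, 29.97, 1.2121),
        "PAL":             (720, 576, 25.0,  1.0940),
        "PAL Widescreen":  (720, 576, 25.0,  1.4587),
    }

    for name, (w, h, fps, par) in sd_presets.items():
        dar = (w * par) / h   # display aspect ratio seen by the viewer
        print(f"{name:16s} {w}x{h} @ {fps:g} fps -> display aspect {dar:.2f}")

    # 4:3 is about 1.33 and 16:9 about 1.78, so the standard presets land
    # near 1.36 and the widescreen presets near 1.82.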

High Definition: High Definition (HD) television technology has existed for decades, but it was during the early 21st century that it became popular with average American television viewers. The term HD describes video with a higher resolution than traditional television systems, now called SD or standard definition. There are two main high definition standards for broadcast television, 720p and 1080i; many televisions and Blu-ray Disc players support a third standard, 1080p.

720p: The 720p format has a resolution of 1280 pixels wide by 720 pixels high and supports a variety of frame rates, from the 24 fps used by film and the 30 fps that was part of the old NTSC standard up to 60 fps.

1080p & 1080i: The 1080 formats have a resolution of 1920 pixels wide by 1080 pixels high and exist in interlaced and progressive versions; as with other modern digital standards, these formats support a variety of frame rates, from 24 fps to 30 fps.
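
Both HD frame sizes share the 16:9 shape but differ considerably in pixel count; the quick sketch below compares them using the 1280 × 720 and 1920 × 1080 figures given above.

    # Quick comparison of the two HD frame sizes.

    hd_formats = {
        "720p":        (1280, 720),
        "1080i/1080p": (1920, 1080),
    }

    for name, (w, h) in hd_formats.items():
        print(f"{name:12s} {w}x{h}  aspect {w / h:.2f} (16:9)  "
              f"{w * h:,} pixels per frame")

    # 1920 x 1080 carries 2,073,600 pixels per frame versus 921,600 for 720p,
    # i.e. 2.25 times as much picture information in every frame.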

Progressive vs. interlaced

The two methods of displaying images on a video screen are progressive display and interlacing. In the United States, before the change to a digital broadcasting system, televised images were sent as interlaced signals, in which every frame of video was made by combining two half-images called fields.

Before the advent of high-definition LCD and plasma screens, televisions were made by wrapping a plastic or wooden frame around a large glass device called a cathode ray tube (CRT). These CRT television screens were composed of a series of even- and odd-numbered lines called scan lines, and each frame of video was displayed by illuminating these lines starting at the top of the screen. Interlacing was created to display video signals on this type of TV set; it worked by illuminating one set of lines first (even- or odd-numbered), and then moving back to the top of the display to illuminate the other set. In this way, the display would show the second set of lines as the first set began to fade, and the result was a complete picture for the viewer. This process occurred 60 times a second with NTSC broadcast television. Unlike interlacing, progressive display illuminates the scan lines sequentially from top to bottom.
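
The toy sketch below illustrates the idea in code: two half-height fields, one holding the even-numbered scan lines and one the odd-numbered lines, are woven together into a single full frame. It is a conceptual illustration only, not a representation of how Premiere Pro or any broadcast system actually stores video.

    # Conceptual sketch of interlacing: a full frame is assembled ("woven")
    # from two fields, one carrying the even scan lines and one the odd.

    def weave_fields(upper_field, lower_field):
        """Interleave two half-height fields into one full frame."""
        frame = []
        for upper_line, lower_line in zip(upper_field, lower_field):
            frame.append(upper_line)   # scan lines 0, 2, 4, ... of the frame
            frame.append(lower_line)   # scan lines 1, 3, 5, ... of the frame
        return frame

    # A tiny 4-line "frame": each string stands in for one scan line.
    upper = ["scan line 0", "scan line 2"]   # drawn on the first pass
    lower = ["scan line 1", "scan line 3"]   # drawn on the second pass

    for line in weave_fields(upper, lower):
        print(line)
    # A progressive display would simply draw lines 0, 1, 2, 3 in order.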

Most modern televisions can display both interlaced and progressive video, and the ATSC includes broadcast standards for both, while all computer monitors use progressive display only. The distinction carries over to video camera formats as well; older NTSC or PAL cameras can only shoot interlaced video, but many newer cameras let you choose between interlaced and progressive shooting modes, for example, 50i (25 fps), 60i (30 fps), 30p (30 fps), and 24p (24 fps). When working in Premiere Pro, we highly recommend that you use sequence settings that match the footage you are working with.


These tutorials are created by the team of expert instructors at American Graphics Institute.