

After Effects tutorial: Understanding video formats in After Effects

What you’ll learn in this After Effects Tutorial:

  • Understanding Video Formats

This tutorial provides you with a foundation for working with Adobe After Effects video formats. It is the first lesson in the Adobe After Effects CS6 Digital Classroom book. For more Adobe After Effects training options, visit AGI’s After Effects Classes.


Some video formats are common for professional video production, while others are suitable only for broadband or small-screen purposes. There are two main standards used for broadcast television, a handful of competing standards for desktop and web video, and a series of device-specific standards used in mobile handheld devices. Technical standards, such as the ones touched upon here, are very complex, and a full description of each one is beyond the scope of this book. In general, regardless of the platform for which you are creating video content, there are three main properties to keep in mind:

Dimensions: This property specifies the pixel dimensions of a video file: the number of pixels horizontally and vertically that make up an image or video frame. This value is usually written as a pair of numbers separated by an x, where the first number is the horizontal value and the second represents the vertical value, such as 720×480. The term pixel is a combination of the words picture and element; a pixel is the smallest individual component in a digital image. Whether you are dealing with a still image or working with video frames makes no difference; everything displayed on-screen is made up of pixels. The dimensions of a video or still image file determine its aspect ratio; that is, the proportion of an image’s horizontal units to its vertical ones. Aspect ratios are usually written in the format horizontal units:vertical units; the two most common aspect ratios seen in current video displays are 4:3 and 16:9.
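As an illustrative aside (not from the book), the aspect ratio implied by a file's stored dimensions can be found by reducing width:height with their greatest common divisor. A minimal Python sketch, with a hypothetical helper name `aspect_ratio`:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce stored pixel dimensions to their simplest width:height ratio."""
    divisor = gcd(width, height)
    return f"{width // divisor}:{height // divisor}"

print(aspect_ratio(1280, 720))  # 16:9 (a 720p HD frame)
print(aspect_ratio(640, 480))   # 4:3
print(aspect_ratio(720, 480))   # 3:2, not 4:3 -- SD video uses non-square pixels
```

Note that 720×480 reduces to 3:2 even though standard-definition footage is displayed at roughly 4:3; the pixel aspect ratio property covered later in this list accounts for the difference.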

Frame rate: This property specifies the number of individual images that make up each second of video. Frame rate is measured as a value of fps, which is an acronym that stands for frames per second.
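In practice, the frame rate determines both how long each frame stays on screen and how many frames a clip of a given duration contains. A quick illustrative calculation in Python (the function names here are assumptions for the example, not anything After Effects exposes):

```python
def frame_duration_ms(fps: float) -> float:
    """Milliseconds each frame is displayed at a given frame rate."""
    return 1000.0 / fps

def frame_count(seconds: float, fps: float) -> int:
    """Whole frames contained in a clip of the given duration."""
    return round(seconds * fps)

print(frame_duration_ms(24))   # ~41.67 ms per frame at the film rate
print(frame_count(10, 29.97))  # 300 frames in a 10-second NTSC-rate clip
```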

Pixel aspect ratio: This property specifies the shape of the pixels that make up an image. Pixels are the smallest part of a digital image, and different display devices such as televisions and computer monitors have pixels with different horizontal and vertical proportions.
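The displayed shape of a frame is therefore the ratio of its stored dimensions multiplied by its pixel aspect ratio. A hedged Python sketch of that arithmetic; the 10/11 value used below is one commonly cited pixel aspect ratio for NTSC DV footage, and After Effects may list a slightly different number:

```python
def display_aspect(width: int, height: int, pixel_aspect: float) -> float:
    """Width-to-height proportion of the image as actually displayed."""
    return (width / height) * pixel_aspect

# Square pixels: storage ratio and display ratio match.
print(display_aspect(1280, 720, 1.0))     # ~1.778, i.e. 16:9

# NTSC DV stores 720x480 with tall pixels (PAR 10/11 is a commonly
# cited value), so the displayed image comes out close to 4:3 (~1.333).
print(display_aspect(720, 480, 10 / 11))  # ~1.364
```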

When producing graphics for broadcast television, you have to conform to a specific set of formats and standards. For example, you need to know whether your graphics will be displayed on high-definition screens (1080i, 1080p, 720p), standard-definition screens, or mobile devices, because this determines the size at which you must create your graphics. Similarly, you need to know whether you’re in a region that broadcasts using the ATSC (often still called NTSC) or PAL standards, as this affects the size at which you can create your graphics as well as the frame rate and pixel aspect ratio you will need to use. If you are producing animation or video for the web, you’ll need to know the format the distributing site will be using: Flash, Silverlight, H.264, or another, since certain video effects don’t work well when exported to certain formats.


In the United States, the ATSC, or Advanced Television Systems Committee, has issued a set of standards for the transmission of digital television. These standards have replaced the older, analog NTSC (National Television System Committee) formats. The standards embraced by the ATSC include standard-definition and high-definition display resolutions, aspect ratios, and frame rates. All broadcast video and graphics must conform to one of the ATSC standards. Information on the various ATSC standards is available on the organization’s website.

High-definition television

While high-definition (HD) television technology has existed for decades, it wasn’t until the beginning of the 21st century that it came to the attention of the average American television viewer. The term HD describes video that has a higher resolution than traditional television systems, which are called SD, or standard definition. There are two main high-definition standards for broadcast television, 720p and 1080i, while many televisions, gaming consoles (PlayStation 3, Xbox 360, and more), and Blu-ray Disc players support a third, 1080p. The letters p and i refer to whether the format uses a progressive or an interlaced display method. Interlacing divides each frame of video into two separate fields; when combined, these two fields form a single video frame that shows a single image. Progressive display forgoes fields and treats each individual frame as its own unique image. In general, progressive displays are clearer and better defined, while interlaced displays require less broadcast bandwidth to transmit to the viewer. Most modern video cameras allow the user to choose whether to record in a progressive or interlaced format.
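The field split described above can be pictured with a toy frame: one field holds the even-numbered scan lines and the other the odd-numbered ones. A small Python sketch, purely illustrative and not how After Effects actually stores footage:

```python
# A toy 6-line "frame": each string stands in for one horizontal scan line.
frame = ["line0", "line1", "line2", "line3", "line4", "line5"]

# Interlacing splits the frame into two fields of alternating lines.
upper_field = frame[0::2]  # lines 0, 2, 4
lower_field = frame[1::2]  # lines 1, 3, 5

print(upper_field)  # ['line0', 'line2', 'line4']
print(lower_field)  # ['line1', 'line3', 'line5']

# A progressive frame simply keeps all six lines together as one image.
```

Which field is displayed first varies between formats, which is why After Effects lets you interpret footage as upper-field-first or lower-field-first when separating fields.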

720p: The 720p format has a resolution of 1280 pixels wide by 720 pixels high and supports a variety of frame rates, from the 24 fps used by film, through the 30 fps that was part of the old NTSC standard, all the way up to 60 fps.

1080p and 1080i: The 1080 formats come in both progressive and interlaced versions and, like other modern digital standards, they support a variety of frame rates between 24 fps and 30 fps.


You will learn more about the differences between progressive display and interlacing later in this lesson.

These tutorials are created by the team of expert instructors at American Graphics Institute.