Most of the discussion on Web page design has been focused on static development: graphics that don't change over time. But there is much more you can integrate at your site, given the appropriate resources. Chapter 5, "Using Multimedia and Special Effects on the Web," touches on some ways of adding various multimedia elements using Shockwave and Java, but this chapter talks about animation and video on the Web apart from those created with Shockwave and Java.
The biggest difficulty with Web animation and video is the bandwidth issue. In other words, how wide and how fast is the pipe that is feeding your machine? To understand why this is an issue, you have to look inside digital animation and video files to see why the pipe limits what can be delivered efficiently and effectively.
This chapter looks at the basics of animation and video and how they are created. Then it covers the four ways animation and video can be integrated on the Web: client pull, server push, streaming via plug-ins, and externally linked files. It also looks at some of the existing digital video formats that can be used with external helper applications.
The concern of file size that applies to static graphics also applies to animations and video. Okay, let's get real. Everything delivered via the Web is primarily an issue of file size: how long it is going to take to download. No matter what you are downloading, much more than a couple hundred kilobytes will leave you wishing you hadn't started.
With animation and video, this presents a significant problem, and here's why. Animations are nothing more than a series of individual images sewn together. Each image is a slightly different iteration of the portrayed elements, which gives the appearance of movement over time. Each additional image, or frame, adds to the file weight (which translates to user wait). The larger the file, the larger the weight (and wait), no matter how you are accessing the Web. With uncompressed video, the addition of audio approximately doubles the file size. Remember that a 1MB video, animation, or graphic file will take around 10 minutes to download for your dial-up users with a 14.4 modem. So your first concern is the size of the file, but what affects that size?
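If you want a quick sanity check on that 10-minute figure, the arithmetic is straightforward (this assumes a best-case connection with no protocol overhead):

14,400 bits per second / 8 bits per byte = 1,800 bytes (about 1.8KB) per second
1MB = 1,024KB; 1,024KB / 1.8KB per second = roughly 570 seconds, or close to 10 minutes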
With animation and the graphic portion of video, there are three things that affect the digital file size: the image dimensions, the number of frames, and the bit depth of the images, as shown in Figure 18.1.
The image size is the biggest contributor to the size of the animation or video file. With Web animation or video, developers are always talking about a very small portion of the screen. Surfers don't expect Web animations and video to be much more than 160×120 pixels. Why? Well, let's look at what is known about the graphics you've created thus far.
A single Graphical Interchange Format (GIF) image at 640×480 pixels, even at 256 colors, is approximately 100KB. Multiply that by 50 or so frames, and you've just overshot your allotment by 4,900KB. So when you deliver animations or video, they must be small (no more than about 160×120 pixels). Take a look at Figure 18.2 to see the relationship of sizes.
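Here's where those numbers come from, assuming a typical GIF squeezes the raw image data by roughly 3:1 (actual savings vary with image content):

640 x 480 = 307,200 pixels; at 8 bits (256 colors) per pixel, that's 307,200 bytes, or about 300KB raw
300KB compressed at roughly 3:1 = about 100KB per frame
100KB x 50 frames = about 5,000KB, or 4,900KB over a 100KB allotment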
The second concern is the number of frames in the animation or video. The frame count is simply the number of individual images in the file. It can be a significant contributor if no compression is used in the particular digital file format. Most animations delivered over the Web are less than 200 frames at 160×120, and most video with audio is less than 100 frames at 160×120. This allows you to hit the respective 200KB or 100KB file size target.
When discussing frames, you must also consider the rate at which those frames are played back. Web playback rates are significantly lower than what you might see on a television or in a video studio, where frame rates are usually 30 frames per second (60 fields per second). Can you expect that from Web video? Hardly! Web video, which may play sporadically at the low end, usually ranges from 5 to 15 frames per second. Remember that you must work within the capabilities of your delivery pipe.
The final contributor to file size is the image bit depth in the animation or video. Remember that bit depth simply describes how much data is allotted to describe a single instance in the file. The higher the bit depth, the more descriptive the digital sample and the more representative it is of the original analog source.
Note |
Graphics, video, and audio all have a bit depth. When you scan an image, capture video, or digitally record audio, you are transferring the object from analog to digital. This is a process known as sampling. The sampling process occurs in small pieces: take an analog chunk, digitally convert it, and digitally write it down. Take another chunk, convert it, write it down, and so on. The frequency with which you take these chunks is called the sampling rate. The more frequent the chunks (the higher the sampling rate), the better the digital representation of the analog source. Once you have a chunk, you have to describe it digitally. The more digital bits you can use to describe each chunk, the more the final digital representation will look or sound like the original analog chunk. Thus, the higher the bit depth and the sampling rate, the more representative the digital image, sound, or video clip is of its analog counterpart. Note, however, that you're only as strong as your weakest link. A high sampling rate with a low bit depth gives you more frequent samples but a poor description of those samples. A high bit depth and a low sampling rate give very detailed descriptions of infrequent samples. The best-case scenario is to sample at your highest capability and then digitally resample to the desired deliverable. This is why we suggest working in 24 bit (if possible) and then changing the mode to 8 bit (which is resampling, so to speak). |
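A quick audio example makes the note's trade-off concrete. CD-quality audio is sampled 44,100 times per second, with each sample described by 16 bits (2 bytes) on each of two channels:

44,100 samples per second x 2 bytes x 2 channels = 176,400 bytes per second, or roughly 172KB per second (more than 10MB per minute)

Halve either the sampling rate or the bit depth and you halve that figure, which is exactly the kind of resampling you'll do before putting audio on the Web.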
The maximum bit depth for an animation or video file depends on the compression algorithm that is used. Some algorithms allow higher bit depths, which in turn give you better-looking playback. Others restrict you to a particular bit depth under certain circumstances. A later section shows how compression affects these two variables and the file size.
The beginning of this chapter has implied that animation and video are somehow different. In the minds of most people, they are. Definitions vary across the board, so let's give you ours.
Generally, the most distinct difference is that animation has been painstakingly created, either via a three-dimensional model or via two-dimensional painting and keying techniques. Video, on the other hand, is captured with a camera, recorder, or other device and then manipulated. Most often, animations require a greater investment of time.
The term digital video normally connotes that there is sound involved in the particular file, while animation may or may not have sound. With animation, it usually depends on your chosen file flavor. For example, animations stored in Autodesk's .flc (flic) format will not have audio, while an animation in .mov, .avi, or .mpg format may.
When discussing animation and video with someone else, it may be a good idea to clarify what you mean when you say animation and video. To many there is a difference (see Figure 18.3).
There are several methods of utilizing animations on the Web. Let's look at each a little closer. Realize that animation and video effects occur either internally in Netscape (or with the help of a plug-in) or externally via a helper application. If you don't have Netscape, this section may not be for you, but you may be able to find plug-ins or nifty features like pull or push for your browser if you look in the right Web spots.
Client pull is a feature integrated into Netscape that allows you to program your pages so that one page automatically jumps to another after a specific amount of time, without the intervention of the user. Due to download time, however, client pull barely classifies as animation; it's pretty sporadic. Some would argue that it is not animation, but I have seen some pretty good animated effects pulled off with client pull. Note that client pull is most effective on pages with very few graphics (see Figure 18.4).
So why use client pull? If it's only good for text pages, why include it in a book about graphics? Well, the idea behind this feature is to allow a quick splash screen to appear when the user first enters your site. It would then send them to the first real page of the site. It was intended to be used much like splash screens that appear when you execute applications like Adobe Photoshop or Premiere (see Figure 18.5). Keep in mind, nonetheless, that it will only work in Netscape. If a browser other than Netscape sees the code, it will ignore it, so make sure there's an escape link on the page. If not, non-Netscape users will be stopped dead in their tracks.
Client pull is nice if used sparingly. Users can become uncomfortable if they lose control of their browser for extended periods of time, which is one of our major interface design concerns. Several pages strung together via client pull are a real pain, not to mention having to backtrack through those pages.
Although it can be used effectively, client pull does have disadvantages. Each automatic jump from one page to another requires downloading of the new page. During this time you may notice a momentary flash of browser gray. Depending on the access route and speed of the machine, the flash length varies (see Figure 18.6). Keep this in mind if you decide to use this technique.
The code for client pull is relatively simple and does not require any additional software. The information for client pull is stored in the <HEAD> tag at the beginning of the Hypertext Markup Language (HTML) code. Basically, a simple client pull would look like this:
<HTML>
<HEAD>
<META HTTP-EQUIV="Refresh" CONTENT="1; URL=http://my.site.com/my_real_1st_page.html">
</HEAD>
<BODY>
<IMG SRC="mypict1.gif">
</BODY>
</HTML>
Huh? Let's look a little closer at the code you just cranked. The <META> tag in the <HEAD> does all the work: HTTP-EQUIV="Refresh" tells the browser to reload, and CONTENT="1; URL=..." tells it to wait one second and then load the page named in the URL. The <IMG> tag in the body is simply the splash graphic the user sees during that second. Change the number to change how long the page sits on the screen.
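Here's a slightly fuller sketch of a splash page, with an escape link added for browsers that ignore the <META> tag (the file names and the five-second delay are just examples):

<HTML>
<HEAD>
<TITLE>Welcome</TITLE>
<META HTTP-EQUIV="Refresh" CONTENT="5; URL=http://my.site.com/my_real_1st_page.html">
</HEAD>
<BODY>
<IMG SRC="splash.gif">
<P><A HREF="http://my.site.com/my_real_1st_page.html">Click here if you aren't whisked away automatically</A></P>
</BODY>
</HTML>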
As you can see, it would be possible to create multiple pages that jump from one to another. But to be honest, multiple page jumping will simply frustrate the user, and the constant flashing that occurs while each new page downloads is irritating as well. Be careful not to overuse this feature!
For most of the animation effects you'll want to do, you probably don't want to animate and redraw the entire screen as you do with client pull. Server push, by contrast, refreshes only the elements that change. Small bullets or dancing elements (called sprites) are good uses for server push (see Figure 18.7). Chapter 5 talks about how many Shocked sites have dancing bullets and small animated elements; Shockwave is an alternative to server push, but it is not the only way to achieve this effect. You can achieve it with Netscape extensions as well.
To use server push, we must revisit CGI scripts. Much like image maps, server push uses external CGI scripts to process the animated imagery that appears on the screen. For most of us, myself included, crunching CGI scripts is not what we like to do. Fortunately, several places on the Web distribute public domain CGI scripts for server push. For example, look at the following sites for CGI scripts that you can use:
http://www59.metronet.com/cgi
http://www.worldwidemart.com/scripts
http://www2.eff.org/~erict/Scripts/
http://128.172.69.106:8080/cgi-bin/cgis.html
Using most of these CGI scripts simply requires you to make a listing of the images associated with the sprite and the order in which they appear. Each is a little different, but most come with brief documentation so you can begin using them pretty quickly.
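The details differ from script to script, but a typical setup looks something like this (the script name, its parameter, and the listing file format are hypothetical, so check the documentation that comes with the script you download). First, a plain text listing file, say sprite.lst, names the frames in playback order:

frame1.gif
frame2.gif
frame3.gif

Then the sprite is placed on your page with an ordinary <IMG> tag whose source points at the script rather than at a single GIF:

<IMG SRC="/cgi-bin/nph-push.cgi?list=sprite.lst">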
One of the most promising means of delivering animation and video over the Web is via plug-ins. Plug-ins like Apple's QuickTime and Microsoft's Video for Windows add a deeper level of possibility on the Web. These plug-ins, like the RealAudio plug-in, allow data to be streamed across the Net and played back in real time. Streaming simply sends small compressed portions of the file while concurrently beginning playback. It works pretty well right now, but as hardware capability increases you'll see more and more of it.
Many streaming plug-ins for audio and video are still under construction, but in the near future you'll see many more available. Keep your eye on Apple's and Microsoft's Web sites for news about plug-ins for video and animation (www.apple.com and www.microsoft.com).
The most common method for distributing animation and video over the Net is through links to external files. In this scenario, the file is completely downloaded and then played through a helper application (see Figure 18.8). The only restriction is that the user must have a player or viewer capable of opening the animation or video file once it is on his machine.
Due to limited bandwidth, this is probably the best way to distribute your animation and video files to your audience. The same procedure you used for high-resolution images in Chapter 7, "Designing Graphical Pages Anyone Can Download," will be used here. To integrate an external file, create code that looks like this:
<A HREF="mymovie.mov">Click here to download the movie</A>
In the previous coding example, the anchor references the movie file directly. When the user clicks the link, the browser downloads mymovie.mov in its entirety and then hands it off to the appropriate helper application for playback.
Most of the time it is advantageous to show the user a representative image from the movie they will be downloading (see Figure 18.9). This helps them determine if they want to download or not. The code would look like this:
<A HREF="mymovie.mov"><IMG SRC="mymovie.gif"> Click here to download the movie</A>
In the previous coding example, a small representative image (mymovie.gif) appears inline as part of the link, giving the user a preview of the movie before committing to the download.
Tip |
Whenever you distribute an external file that requires a Multipurpose Internet Mail Extensions (MIME) type and helper application, provide a link for the user to get the appropriate helper application so he can view your file. For example, in addition to a link to your QuickTime file, provide a link to download the player for QuickTime. |
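For example, a download page for a QuickTime movie might pair the two links like this (the player URL is only a placeholder; point it wherever the current player actually lives):

<A HREF="mymovie.mov"><IMG SRC="mymovie.gif"> Click here to download the movie</A>
<P>Need a player? <A HREF="http://www.apple.com/quicktime/">Download QuickTime from Apple</A></P>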
Most animation and video files utilize compression to help reduce file size. As you found with Joint Photographic Experts Group (JPEG) images, some image data is discarded to help maintain smaller file sizes. The data that is discarded is either redundant or acceptably lost.
As with JPEG images, digital animation and video files are almost always compressed for distribution. However, the compression occurs over a range of images rather than within a single image. All compression techniques use an algorithm called a compressor/decompressor (codec). The amount that a particular image compresses with a codec is called the compression ratio, which is a ratio of sizes before and after. Compression ratios range from 2:1 to 200:1.
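To put those ratios in perspective: at 10:1, a 2MB uncompressed clip shrinks to about 200KB; at 100:1, that same clip comes out around 20KB, with a corresponding loss in quality.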
Note |
When discussing digital compression, realize there are three main levels of compression that can occur: internal file compression, external file compression, and drive compression. Each functions a little differently. All the compression talked about in this chapter is internal file compression. JPEG, LZW (Lempel-Ziv-Welch), and PackBits are internal file compression, occurring inside the particular image file. The compression scheme is knowledgeable about the data in the file, which is what allows it to compress at all. Knowing what type of data to drop or substitute is vital. At this level, the compression may be either lossy (like JPEG) or lossless (like LZW). The second type of digital compression is external compression. Products like PKWARE's PKZIP or Aladdin's StuffIt Deluxe classify as external compression. External compression means that the compression occurs without knowing what is actually in the file; it simply looks for redundancies in the binary data of the file. Compression at this level is lossless, because lossy compression would destroy the readability of the file to its native application. The final type of compression is drive compression, which includes products such as Stacker and DoubleSpace. These products strive to store all data as compressed on a particular drive or disk and then uncompress data as it is needed for use. All compression at this level is lossless because data lost at this level would undoubtedly cause system errors. |
Each digital video format is arguably different. Each gives different (but similar) visual results, and each gives different compression ratios. Which is the best? Well, it depends on whom you talk to, but the most prevalent on the Web are QuickTime, AVI, and MPEG. Viewers for the various platforms are available on many sites. Some will argue that there aren't players for one platform or another, but I hesitate to say that any one in particular dominates the Web. Most of the latest versions of viewers for all three are at www-dsed.llnl.gov. The best way to access the site is via File Transfer Protocol (FTP). Then take a look in the pub/programs directory. You'll see that the programs are divided by platform, and there are resources for Macintosh, Windows, and UNIX boxes.
Video for Windows or .avi files were designed to distribute video in the Windows environment. The advantage to using Video for Windows files is the ease with which they can be distributed and used on Windows 3.1, 3.11, and Windows 95 machines. Microsoft's Media Player will automatically recognize video segments saved in VfW format. In addition, there are ways to either directly view or convert .avi files for use on other platforms.
The .avi format allows a couple of different codecs to compress the file, even though it is an "AVI file." People who view the file will not know that it is compressed with a particular codec. But if you're creating video and saving in the VfW format out of a package like Adobe Premiere, realize that the codec you choose can affect the viewed output.
The two primary codecs for AVI files are Cinepak and Indeo. In general, Cinepak is better at compressing animations and video that are predominantly composed of solid colors, while Indeo is better at recorded video with a large range of colors. Both are lossy and will drop certain amounts of detail, but they each function better at certain tasks. If you're using a product like Premiere to save animation or video, you can easily set the save routine to output in your chosen codec (see Figure 18.10).
Apple's QuickTime format, the first format available for desktop users, is very similar to Video for Windows. If I were to say any particular format was most used, it would probably be QuickTime. This is chiefly due to it being the oldest format of the three.
The real differences in file size and playback quality are difficult for us general users to distinguish. I'm sure the technoweenies at either company could lay it all out and tell us why AVI or QuickTime is better, but do we really care? Most of us are tired of the Mac versus PC scenario. Which should you choose? Are you a Mac user or a PC user? Choose the one that fits your audience and your platform. Enough said. If you pick the right software, platform choice is unimportant.
QuickTime, like Video for Windows, allows you to choose your codec. Again, the most frequent are Cinepak and Indeo. Remember that Cinepak should be used for generally solid colors and Indeo for recorded video.
The newcomer to the heated battle is MPEG, a slick digital format that has proven to be a pretty good contender in the market. Currently, MPEG-based video machines are appearing everywhere. The biggest problem, as with all newer technologies, is that they are pretty pricey. Encoders for MPEG files, which are the boards that allow you to record and/or save digital files, range from $500 to $500,000, but for most MPEG files found on the Net, playback can be performed without the aid of additional hardware.
MPEG, unlike VfW or QuickTime, doesn't have options for assigning codecs when you're saving; you simply save in MPEG format. Many times when you run across MPEG files on the Net, you'll find that the audio and video portions are separate files, but this is becoming less common as MPEG hardware becomes more available and less expensive.
As indicated earlier, animation files generally require more time to create than simply sampling video for desktop use. However, adding complicated audio effects or video transitions can push video creation time to a level equivalent with animation, particularly if you are talking about broadcast-quality video. With animation, there are two basic methods of creation: two-dimensional and three-dimensional.
Traditional cel animation, a two-dimensional technique, requires the creation of specific key frames by drawing and painting each frame. The key frames are the main action points in the animation. The animator then hand-generates the frames in between the keys, a process called in-betweening. For the most part, this is a two-dimensional process that requires drawing each frame either digitally or by hand. Examples of two-dimensional animation packages include Autodesk's Animator and Macromedia Director.
The second method requires the creation of a three-dimensional model of the objects, characters, and background environment. Then surface textures and materials are pasted onto the objects in the model. The animator then sets up key frames by physically moving the objects in the three-dimensional database, while the computer creates the in-between frames based on the user's placement of those objects. The animator can then tell the computer to render out an animation file based on the database. Examples of these types of packages would include Autodesk's 3D Studio Max, Macromedia's Extreme 3D, Strata Studio Pro, and Crystal Graphics Topas.
Generally speaking, three-dimensional environments allow a greater degree of realism than two-dimensional techniques. That's not to say that you cannot achieve realism with two-dimensional techniques. However, achieving realism with two-dimensional techniques requires greater artistic ability and a greater amount of time.
Today there is some crossover and convergence between the two techniques. Sometimes it is advantageous to model portions of the background content so that an infinite amount of detail can be automatically generated. An example that sticks in my mind is from the Disney animated feature Beauty and the Beast. In the movie, there is a scene of the two characters dancing in a large ballroom where the background is extremely ornate and detailed. A three-dimensional model was used to generate a more photo-realistic effect without having to painstakingly create hand-generated artwork.
Programs that create digital video are quite common; however, most high-end, broadcast-quality packages are very expensive. This chapter focuses on the low end of digital video, the kind that can be delivered over the Net. The most powerful package for desktop video creation is undoubtedly Adobe Premiere.
Packages for digital video most often allow the user to import all types of files including static graphics, animation, raw video, and sound files. Most will also allow the user to sample video, audio, or both right in the package. Depending on the package, the user may also be able to create video and audio effects like complex transitions, chroma keying, or other video special effects. Be aware, nonetheless, that the price of the package will reflect the added features. You get what you pay for.
The process of creating digital video most often requires sampling the video and audio and then combining them in the package. You may also simply want to overlay sound onto an animation you've created. Let's focus on designing a video file created from an animation you already have: combine a little audio with it and save it as both an Indeo AVI and an Indeo QuickTime movie to compare file sizes. To create the digital video file for the Web, import the animation and the audio clip into the video package, assemble them, and then save the movie once in each format.
In the example shown, the final file size difference between VfW and QuickTime was about 1,000 bytes; in other words, not a whole lot. Both files were about 4KB. The playback of each was not significantly different. Again, the real decision of VfW versus QuickTime comes down to who your audience is and what platform you are developing on. The only significant decision you need to make is between the Cinepak and Indeo codecs.
VRML (pronounced "ver-mul"), or Virtual Reality Modeling Language, is a language that allows users to create three-dimensional worlds on the Web (see Figure 18.18). It allows three-dimensional descriptions and audio to be included on the Web. It does, however, require a special browser to view these three-dimensional worlds.
Unlike HTML pages, VRML gives you the first-person advantage. Instead of funneling through pages or even controlling a character through a maze, you are in the maze. You get the feeling of actually being in the three-dimensional environment.
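Getting users into a world from a regular HTML page works just like linking any other external file; the world file name here is hypothetical, and the user's VRML browser (or plug-in) takes over once the .wrl file arrives:

<A HREF="lobby.wrl">Step into the three-dimensional lobby</A>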
VRML technology is continuing to spread over the Web, and as hardware and bandwidth get faster and wider, you'll soon see that a Web page is not enough. Standards will dictate that your office, home, or operating environment be available via the Web.
Now that you know about delivering animations and video on the Web, let's wrap up with some common questions about capturing and delivering your own material.
What hardware and software do I need to be able to sample my own video? | |
To be able to capture video and record it to your hard drive, you need (a) a video capture board, (b) capture software, and (c) plenty of hard drive space. There are many different capture boards available on the market, and price is associated with quality. Most come with software such as a Limited Edition version of Adobe Premiere or proprietary software. Make sure you have plenty of hard drive space, because with digital video, free space quickly becomes a precious commodity. | |
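To put the space issue in perspective, here's some rough arithmetic for a modest capture (assuming 320 x 240 pixels, 24-bit color, and 15 frames per second, before any compression):

320 x 240 pixels x 3 bytes per pixel = 230,400 bytes per frame
230,400 bytes x 15 frames per second = about 3.3MB per second, or roughly 200MB per minute

Hardware compression on the capture board reduces this considerably, but the point stands: digital video eats drives.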
How does a video capture board sample the video and audio? | |
Video capture boards use a set of chips called analog-to-digital converters (ADCs). These chips receive input from your NTSC source (camcorder, VCR, or VTR) and digitally convert it. Most capture boards allow you to plug in S-VHS- and RCA-type plugs. If you decide to purchase a capture board, make sure the board accepts the type of cable you'll be trying to plug into it. | |
I am trying to use a server push scheme to animate a small logo on my page. It is working, but some of the colors in the images I'm using are doing some strange things. | |
Make sure all the images you intend to use in a server push scenario are palettized to the same palette. If a different palette is used for each image, such as an adaptive or optimized palette, you'll end up with palette flash when your sprite images play in the browser.