Everything A Photographer Should Know About Videography
*Light trail from car tail lights.*
As the modern photographer that you probably are, I bet you've experienced the following scenario.
You have a business/side hustle/hobby doing photography and a client/freeloader that you're working for. Somewhere between pre-shoot arrangements and the shoot itself the client comes up to you with a big smile and says "Hey, that's a nice camera! Could you take a few clips too while you're at it?".
Oh brother. Okay, fine, every modern camera is good at video, you might as well. You know how to frame, compose, design, edit. You've binged TV and maybe you're old enough to have gone to the cinema on a regular basis. How hard could it possibly be?
On an artistic level, transitioning to video isn't that hard! I would even go so far as to call video easier than photography, as you don't have to worry as much about background separation (the subject moves!) or implying motion (the picture moves!).
On a technical level, there are a lot of concepts you should grasp before you start filming paid work or good portfolio content (or Instagram reels, which are the only type of content Instagram pushes to new audiences). Our concerns are smooth motion, consistency throughout a shot, limitations in color, limitations in data rate and good delivery options. Let's get into it.
Story time! A couple weeks ago I was taking street shots when I saw a protest going by. Like a shark smelling chum, I followed them to take pictures. I love photographing protests, because they are organic, significant events and the subjects either expect or even desire to be photographed and published. Partway through me taking photos, one of the protest organizers approached me and asked me to share the photos with him, as well as shoot a bit of video. I was already shooting video, so that was okay. He then rushed me on the delivery and wanted edits!
Frame Rate
My background is in software engineering, so please trust me when I say that the choice between 24 and 30 frames per second is technical, not creative. Because this is important for me to convey effectively, let me repeat this loudly and clearly.
The choice between 30 and 24 frames per second is technical, not creative.
The most crucial aspect of videography is an even, smooth playback rate, no matter the frame rate of the recorded video. All the frames in the video file must be shown on screen for the exact same amount of time, so there has to be synchronization between the video frame rate and the display's refresh rate.
This doesn't have to be exact. Since the only requirement is even frame display timings, the display refresh rate just needs to be an integer multiple of the video frame rate to achieve smooth playback.
Displays that refresh 60 times per second are still the standard. Apple displays without ProMotion are overwhelmingly 60Hz displays. So are the vast majority of Windows laptops, price-competitive televisions/monitors and most old Android phones.
To achieve smooth playback of your video, you must produce it at an integer divisor of the displays that will show it. If you are producing content that will be consumed on the web, the vast majority of displays that will show your content will be 60Hz displays. When you film at 30fps, the resulting video will be displayed on a 60Hz monitor simply by holding on each frame twice. (Note: 30fps content plays back exactly the same on 30Hz and 60Hz screens in all but very few edge cases.)
24fps footage does not play back smoothly on 60Hz screens. A screen has to display something on every single refresh cycle, so each 24fps frame is held for two refresh cycles, then three, then two again, and so on and so forth. That uneven 2:3 cadence repeats 12 times a second, producing a steady 12Hz judder.
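If you're curious, here's a minimal sketch (my own illustration in Python, nothing you need on set) that computes how many refresh cycles each frame gets held for:

```python
def pulldown_pattern(fps: int, refresh_hz: int, n_frames: int = 8) -> list[int]:
    """How many refresh cycles each video frame is held on a fixed-rate display."""
    pattern, cycles_shown = [], 0
    for frame in range(1, n_frames + 1):
        # This frame's display window ends at frame/fps seconds;
        # integer math avoids any floating point drift.
        end_cycle = frame * refresh_hz // fps
        pattern.append(end_cycle - cycles_shown)
        cycles_shown = end_cycle
    return pattern

print(pulldown_pattern(30, 60))  # [2, 2, 2, 2, 2, 2, 2, 2] -> perfectly even
print(pulldown_pattern(24, 60))  # [2, 3, 2, 3, 2, 3, 2, 3] -> the 12Hz judder
```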
Therefore, when filming web content, 30fps is the correct choice. Let me clear my throat and go over that again.
When filming web content, 30fps is the correct choice.
What about TV stations broadcasting 24fps film in 60Hz TV countries? They've been running pulldown tricks like the 2:3 cadence above since the earliest days of broadcast, and modern TVs layer motion smoothing on top. What about Blu-ray and DVD? The player reads the frame rate off the disc and tells your screen to clock down to an integer multiple of the frame rate. This mode switch takes a second, and devices don't do it for web browsing (you can see how this works by going to a computer's display settings and changing the refresh rate to a lower value).
Pro tip: when you drag your main footage into your editing program after the shoot, the program might pop up a message asking you to change the project's frame rate to the frame rate of the files you just imported. Always say yes! It ensures the video is processed and exported at the same frame rate as the source footage. Keeping your frame rate consistent at every stage from shooting to uploading is the most important takeaway here, even if you use a sub-optimal frame rate of 24 for web delivery.
Shutter Speed (a.k.a. Shutter Angle)
Most of the motion information in a 24 or 30fps video file is stored within each frame. Think of a photo taken at a very slow shutter speed, such as a light trail picture taken at night. Between the start and end of each light trail is motion information, as the sensor captured the light's movement for the duration that it was exposed.
*Jay Bauman turning his head in a Red Letter Media video, rendering it blurry.*
Videographers take advantage of how image capture works in cameras to encode motion information in every frame of video. Go to any professionally shot video, preferably one with motion happening in a plane parallel to the camera sensor. Quick, press pause! Do you see how the elements of the image which are in motion are blurry, while the more static elements are sharp? This is because the motion took place while the frame was being exposed.
This motion blur effect is not motion smoothing! Motion blur is natural and desired, as the information contained within the blur is real information captured by a camera in the real world. It is not noticeable during playback unless the motion is fast across the frame. Motion smoothing is a post-process or playback effect whereby software invents new in-between frames that the camera never captured. It's like when your photography clients put a filter on your JPEG before posting it on Instagram.
Professional videographers use a shutter speed of 1/(2 * frame rate), so 1/60 for 30fps footage and 1/48 for 24. This produces motion exactly the way you might see it on TV or at the cinema. If you don't feel comfortable shooting video fully manual, or the environment is too dynamic for manual control, shoot in shutter speed priority mode. This ensures your motion looks perfect for an entire video file.
There is a bit of flexibility here, but don't stray too far! If your shutter speed is too slow, you'll lose definition in the things you're filming. Too fast and you'll lose the smooth motion effect!
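To connect this back to the "shutter angle" in the heading: a 180-degree shutter exposes each frame for half of its interval, which is exactly the 1/(2 * frame rate) rule. A quick sketch of the arithmetic:

```python
def shutter_duration(fps: float, shutter_angle: float = 180.0) -> float:
    """Exposure time per frame, in seconds, for a given shutter angle.

    A 360-degree shutter would expose for the entire frame interval;
    180 degrees exposes for half of it, i.e. the 1/(2 * fps) rule.
    """
    return (shutter_angle / 360.0) / fps

print(f"1/{1 / shutter_duration(30):.0f}")  # 1/60 for 30fps
print(f"1/{1 / shutter_duration(24):.0f}")  # 1/48 for 24fps
```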
Exposure
Wait a minute - 1/60?! 1/48?!?! But that's so bright!
Indeed it is! When filming an outdoor event during the daytime, ISO 100 and even f/22 will result in an overexposed image at the recommended shutter speeds. This is why videographers often use Neutral Density (ND) filters, which simply reduce the amount of light reaching your sensor ("Sunglasses For Your Camera"™). ND is such an important part of the exposure process for videographers that it effectively occupies the corner held by shutter speed in the exposure triangle!
A Variable ND filter is more flexible during a shoot. A fixed neutral density filter has better color performance. A matte box (the box in front of camera lenses at high end shoots like feature films) is the best of both worlds, allowing the user to easily swap fixed neutral density filters. Some new, expensive video cameras (like the Sony FX6) come with built-in, electronically controlled ND filters!
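Sizing a filter is just counting stops. Here's a back-of-the-napkin sketch, where the metered 1/2000s is only an illustrative daylight reading, not a rule:

```python
import math

def nd_stops_needed(metered_shutter_s: float, target_shutter_s: float) -> float:
    """Stops of ND required to slow the shutter while keeping ISO and
    aperture fixed. Both arguments are durations in seconds."""
    return math.log2(target_shutter_s / metered_shutter_s)

# Meter says 1/2000s in daylight, but the 180-degree rule wants 1/60s:
stops = nd_stops_needed(1 / 2000, 1 / 60)
print(f"~{stops:.1f} stops, roughly an ND32 (5-stop) filter")  # ~5.1 stops
```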
Exposure Stability
As photographers, we only care about exposure on a per-image basis — we can just toss photos that came out way under-exposed or way over-exposed. That same luxury is not always afforded to videographers!
If you set your camera to film in automatic exposure while turning a light on and off, you'll notice that every time you turn the light off, your camera increases exposure slowly, and the opposite happens when you turn on the light. Imagine trying to film a concert or a wedding reception!
In photography, you expose for your subject and hope that dynamic range takes care of everything else. In videography, you expose for your subject on average and hope that dynamic range takes care of everything else.
If you find that your camera is changing exposure settings too much during a shot, you can start trying out full manual exposure, or you can use the Automatic Exposure Lock feature on your camera. On my Sony a6300, it's marked as AEL. For some reason the button is set to hold by default, so dig around the button customization settings and set it to toggle. Frame your shot, toggle AEL on, film your shot, toggle AEL off. Locked exposure, no manual faffing!
Focus
Similar to exposure, a photographer only cares about focus on a per-image basis. It only matters whether the sliver of a moment represented in a hero shot came out in focus. It's best practice to delete the frames that came out unacceptably blurry. Videographers don't get to do that!
Just as we need our camera to hold stable exposure throughout a shot, we also need focus behavior to be consistent throughout a shot. A camera that hunts for focus mid-shot can ruin our footage!
Unfortunately, there is no equivalent to AEL for focus. Some may advise you to make your focus more or less sticky in your camera settings, but that cuts both ways: a stickier setting is more likely to stay stuck on irrelevant elements in the frame, and a less sticky one is more likely to let go of your subject. If you're having problems with focus throughout a shot, here are some ideas:
- Increase f-stop. A deeper depth of field allows your camera to pull and hold focus more easily, and missed focus is less dramatic when it does happen (see the sketch after this list). I would recommend f/5.6 or more when shooting alone off a camera monitor.
- Lower resolution. If your client doesn't want 4K, missed focus is a lot harder to spot at 1080p.
- Center focus. This, combined with plain Jane composition with your subject dead in the middle of the frame, should be a clear message from you to your camera about what you want to have in focus.
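As promised above, here's a rough sketch of how much slack stopping down buys you. The thin-lens depth of field formulas are standard; the 35mm lens, 3m subject distance and APS-C circle of confusion are just illustrative assumptions:

```python
def depth_of_field_m(focal_mm: float, f_number: float, subject_m: float,
                     coc_mm: float = 0.020) -> tuple[float, float]:
    """Near and far limits of acceptable focus, in meters (thin-lens model).

    coc_mm is the circle of confusion; ~0.020mm is a common APS-C value.
    """
    s = subject_m * 1000.0  # work in millimeters
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * s / (hyperfocal + (s - focal_mm))
    far = (hyperfocal * s / (hyperfocal - (s - focal_mm))
           if s < hyperfocal else float("inf"))
    return near / 1000.0, far / 1000.0

# 35mm lens focused at 3m: f/5.6 more than triples the in-focus zone.
print(depth_of_field_m(35, 1.8, 3))  # ~(2.76m, 3.29m) -> ~0.5m of slack
print(depth_of_field_m(35, 5.6, 3))  # ~(2.36m, 4.11m) -> ~1.8m of slack
```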
In professional video shoots, manual focus is still king. Videographers build rigs with large monitors to nail the focus while shooting. In high end shoots and cinema sets, there is a role known as First Assistant Camera whose entire job is to pull focus!
The ease with which you can pull manual focus is what typically separates video and photo lenses. Absurdly expensive cinema-grade lenses are both optically perfect and free of focus breathing. Focus breathing is that little zooming in/out behavior you get when focusing, and it plagues even prime lenses. If you never noticed your lens doing that before, I'm terribly sorry!
Remember, you are shooting at 1/48 or 1/60, so if your subject moves around a lot, all your camera will see is blur! Don't get angry that it can't focus — in videography, blur is often all your camera can see!
Stability
If your shutter speed is too slow, there will be too much blur in every frame of your video. With your shutter speed at an appropriate setting, there is still a gap in visual information between your frames. If a visual element or your entire frame moves too fast, you will have elements disappearing from one position in the frame and appearing in another instantaneously! Even if you keep your subject steady as you move around them, going too fast will make them consistently blurry in the video (think spinning a coin on a table versus walking around it in a circle)!
In videography, slow and steady is the name of the game. Of the following, I recommend at least one:
- Optical Image Stabilization (in-lens)
- In Body Image Stabilization
- Tripod
- Gimbal
OIS and IBIS are increasingly common in recent lenses and bodies respectively. Having both on simultaneously is beneficial, because your hands shake in every possible axis in 3D space, and your lens and your sensor stabilize in different planes! If you whip your camera to rotate it, the stabilizer in your lens moves faster than your floating sensor.
A tripod is good when you Just Need Some Footage of an event, but a locked-in shot is not visually interesting. Think of how much mileage the creators of The Office got from that small set — could you binge watch a 90s sitcom the same way?
A gimbal is expensive and it's priced in tiers depending on the weight of your camera plus lens and accessories, taking into account the center of gravity of your whole stabilized rig, which you might have to offset with a plate... Definitely try a gimbal out on your own time before bringing it to a client shoot.
In general, keep your camera movements at a reasonable speed. Think of it this way: a very bright screen displaying a video that changes a lot from frame to frame is effectively a strobe light!
Color Spaces (a.k.a. Reference Gamuts)
When your computer sends RGB color information to your display, such as "red", it sends something like "FF 00 00". But what kind of red? What kind of red would "99 00 00" be? Have you ever seen a Pantone book? There's a lot of colors out there!
Reference gamuts are what match computer data to actual colors in the real world.
Why am I telling you all this? Because contemporary digital photography defaults to the sRGB gamut, while standard dynamic range video lives in Rec. 709 (the two share the same primaries but differ in their transfer curves). Nearly all online video is in Rec. 709, including streaming services. It is simply the default.
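To make "but what kind of red?" concrete: the same code value lands at a different brightness depending on which transfer curve the display assumes. A small sketch (the pure 2.4 power law for Rec. 709 displays is the simplified BT.1886 model, and the numbers are purely illustrative):

```python
def srgb_to_linear(c: float) -> float:
    """sRGB electro-optical transfer function (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rec709_display_to_linear(c: float) -> float:
    """Rec. 709 display gamma, simplified to the BT.1886 pure power law."""
    return c ** 2.4

code_value = 0x99 / 0xFF  # the red channel of "99 00 00"
print(f"as sRGB:     {srgb_to_linear(code_value):.3f}")            # ~0.318
print(f"as Rec. 709: {rec709_display_to_linear(code_value):.3f}")  # ~0.294
```

Same bytes, different light. That's why assuming the wrong color space shifts your contrast and exposure.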
When shooting video, your camera should be in a gamut that allows you to produce video in Rec. 709. If you're just starting out, it's a good idea to not mess with color space conversions and simply film in Rec. 709. When you don't dig around in color space settings, your editing program will assume all your incoming footage is in Rec. 709 and it will produce a video that's in Rec. 709. Doing everything in Rec. 709 from start to finish is the best way to ensure consistent color and exposure from your camera to the final video.
A note on camera behavior
Unfortunately, this can get pretty complicated, especially on Sony cameras. Sony buries its color space choices inside "Picture Profiles", which bundle a gamma curve with a color mode. Typically, this should be off while shooting photos (Picture Profile off means you're using sRGB or AdobeRGB; it's in the camera settings somewhere). The profiles are denoted PP1 through PP9, alongside "PP Off", on my a6300. I don't recall what the default settings are, but I customized PP1 by clicking right on the wheel and setting Gamma to "ITU709" and Color Mode to "ITU709 Matrix".
Here's the kicker: old Sony cameras don't switch out the Picture Profile setting when the user switches from Stills to Video and vice-versa. If you switch from picture to video, you have to switch to your Rec. 709 Picture Profile manually! When going from video to picture, you have to switch to "PP Off" manually! Gah!
This behavior is adjustable on newer Sony cameras so that picture profiles can be separate for stills and video — no such luck on my a6300. Memory recall is an option, but I still have to check that my camera is running the appropriate Picture Profile. This has become an endless source of frustration.
I would highly recommend you learn about your camera's color space and picture profile settings. Video color is very limited, especially compared to RAW photos. Getting color wrong in video is much more of a concern than in photos, because in video you have a lot less leeway in post-production.
RAW and Flat: Use Caution!
RAW photos are the best. The amount of data stored in them might be invisible to the naked eye, but photo editing software uses all this extra information to make very significant changes without introducing unwanted visual artifacts.
In video, RAW (see: ProRes RAW, Nikon N-RAW, REDCODE) and flat/logarithmic color (see: S-Log, C-Log, N-Log, F-Log) formats are very advanced tools that require practice both when filming and when doing color work in post.
A RAW photo can come out looking magnificent without any post-processing. This is because the camera does work in the background when you shoot RAW, even in full manual, adjusting different variables to get good reference color and good dynamic range.
You don't realize the camera is doing this until you shoot flat color video and discover that none of that work happens there. Adjusting dynamic range, saturation and a whole host of other things is left up to the colorist, or whoever picks up their mantle in post.
Because we are in video, these adjustments have to be made across the time dimension of a shot. In a photo, you can mask something out and make adjustments to it. In video, if that something moves, your mask has to move with it. If the lighting conditions change during a shot, your adjustments have to change to compensate!
Even some of the best RAW and flat footage is nowhere near as flexible as RAW shots from very old cameras. It is very easy to mess up filming RAW and end up with unrecoverable footage! I have 2 years of experience shooting video, but Rec. 709 is still my go-to* when I'm filming something I have no creative control over (think concerts, parties).
* In Sony cameras, the Cine4 picture profile is essentially Rec. 709 with a more pleasing, cinematic look straight out of the camera.
If you want to learn how to shoot and grade RAW and flat color, take a long while to watch some tutorials. Don't skip the workflow tutorials: color work happens in the later stages of an edit, where a shot may already have been split into multiple clips. Learning color space management is invaluable. Shoot clips in real life, and practice grading them afterwards.
Don't mess with LUTs! A lot of people try to sell them because they're easy ways to describe static color adjustments in a file supported by lots of devices and software, but if you don't want to do the work of color adjustments in post, your camera does a much better job adjusting colors dynamically in the moment. Color management in editing software is how you switch color spaces, never LUTs.
Compression
There are a few compression-related choices you'll have to make when producing video. As a rule of thumb, you should view video as always being very highly compressed. In photography, a 2 megabyte JPG and a 100 megabyte PNG will look the same when crushed under the heel of Instagram's heavy compression.
In video, you might have, for the sake of argument, my camera's maximum 100 megabit per second bitrate at 24 frames per second in 4K, or an average of 4.17 megabits a frame. But this is megabits — in megabytes, that's just over 0.5 megabytes per frame. It's crazy small!
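Here's that arithmetic spelled out, next to what an uncompressed frame would weigh (the 8-bit 4:2:0 pixel layout is an assumption, but it's typical for consumer cameras):

```python
MEGA = 1_000_000

def megabytes_per_frame(bitrate_bps: float, fps: float) -> float:
    """Average compressed size of a single frame, in megabytes."""
    return bitrate_bps / fps / 8 / MEGA

budget = megabytes_per_frame(100 * MEGA, 24)
print(f"Compressed budget: ~{budget:.2f} MB per 4K frame")  # ~0.52 MB

# An uncompressed 8-bit 4:2:0 UHD frame: 1.5 bytes per pixel.
raw = 3840 * 2160 * 1.5 / MEGA
print(f"Uncompressed:      ~{raw:.1f} MB per frame")        # ~12.4 MB
```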
Note two factors:
- You have much, much less data to work with compared to shooting stills. Any compression will have a significant effect on your image quality.
- Your video is going to go through at least 3 rounds of compression before going live on the internet.
And now, for the rounds of compression.
First, your camera takes the signal from your sensor, runs it through the normal digital photographic process, then compresses the footage at a quality level its processor can keep up with and a bitrate the memory card can absorb. A memory card's speed can limit the quality your camera can record, and a fast card makes photo ingestion and buffer clearing faster too!
Second, your editing program runs compression when you export your video. This is because compressed video, such as the file your camera produced, isn't really... video. The data in a compressed video file is meaningless in its compressed format; a decoding stage has to be applied to output the actual visual information that your editing program can use and that your computer can transmit to your display. In effect, if you insert a video into your editing program and export it without making any modifications whatsoever, the editing program will re-compress your video, even if the new video has a higher bitrate!
Finally, the delivery medium will run its own round of compression. YouTube, Instagram and most other platforms will run a staggering amount of compression on your video. You should mentally prepare for this and understand that it's normal and can't be circumvented. Modern web compression algorithms are quite good. They produce almost no artifacts and achieve a very clean reduction in detail that will drive you mad when you directly compare the low-quality video that's available online and the sharp video that your editing program produced.
The key here is understanding that every round of re-compression runs a full round of compression on the input video, as though the input video wasn't compressed at all. To ensure minimum quality loss, you must produce the highest quality video you can produce at every stage of the process. Record at the highest quality, export at the highest quality, pray to your highest quality deity that YouTube won't step on your video too hard.
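If you want to see generation loss for yourself, here's a toy experiment that uses JPEG stills as a stand-in for video codecs (Pillow required; frame.png is any hypothetical source image):

```python
import io
from PIL import Image

def recompress(img: Image.Image, quality: int = 75) -> Image.Image:
    """One full round of lossy compression, decode included,
    like each re-encode in the camera -> editor -> platform chain."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

img = Image.open("frame.png").convert("RGB")  # hypothetical source frame
for _ in range(3):  # camera, editor, platform
    img = recompress(img)
img.save("after_three_generations.jpg")  # compare against the original
```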
Delivery
With online video, my recommendation is to listen to the platform when it comes to the format of your final deliverable, but ignore their suggested bitrate*. The format ensures maximum compatibility with their compression algorithms, but the bitrate doesn't matter*, as they'll re-compress anyway. Of course, ask your client what platforms they'll upload to, and provide them with platform-specific deliverables.
* Disregard this advice for live streaming! Obey those bitrate limits, because they are actual limits.
It is important that you do not run any more compression on your video after rendering, as it will severely degrade the quality of the video! Use your editing program's export settings to get a video that fits your storage and can be uploaded from your connection.
I won't include any specific formats or numbers in this post; my advice is that you look them up yourself, preferably straight from the platform you're uploading your video to. Here's a funny tidbit I found a while ago: Netflix upload recommendations are public! Seriously, listen to the platform. You might find sources claiming that a more recent video codec than what your platform asks for is "better", but just because a more modern codec looks better sitting on your computer doesn't mean it's better for the compression algorithms that your host platform runs when you upload the video.
If your client wants to handle upload, send them the files through a platform that won't re-compress your final deliverables. Google Drive doesn't re-compress, Google Photos does. Messaging platforms may re-compress when you upload a file that they recognize as a video.
Recall the bit about editing programs re-compressing video! If your client wants to crop, add logos or make any modification by themselves, they will reduce the quality of the video significantly. Talk to your client so you can produce all possible versions of the video that they might need from your end.
Pro tip: web video compression really doesn't handle grain well. I adore grain, and I've produced many stills where I cranked the grain way up through Lightroom. In video, I almost never add grain in post, because the compression algorithms that work on online content are ruthless. If you add light grain to something you'll upload to YouTube, the grain won't be visible — you'll just get a video with worse looking compression than if you hadn't added the grain.
—
Well, that was a lot! Sorry for the technically dense post — I promise I'll try to make my next post about more woo-woo creative subjects like light, gesture and color. 😉
In the meantime, the best way to get in touch with me is through my Instagram!
I'll leave you with one more, less technical tip. Video editing programs (a.k.a. non-linear editors) are made for professionals who need time-optimal workflows in tools they use every day for work. They are not optimized for newcomers to stumble around in. Watching tutorials is invaluable, and learning from professional editors will teach you the optimal way of doing things. After all, you could technically create a video by painting on film reels; everything above that level of effort is a workflow optimization.
Got any more tips for photographers who are new to video? The comment section is down below!
