
@rlan
Last active January 10, 2026 14:51
Convert video to jpg and back using ffmpeg

Video and JPG conversion using FFmpeg

Grab one frame from video

#!/bin/sh -x
# Grab a single frame from a video.
# $1 - input video file, e.g. video.mp4
# $2 - timestamp, e.g. 00:33
# $3 - output image file, e.g. output.jpg
ffmpeg -ss "$2" -i "$1" -vframes 1 -q:v 2 "$3"
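As a sketch of how the arguments line up (the helper name `build_grab_cmd` is hypothetical, not part of the original script), one can assemble the command as a string and inspect it before running anything:

```shell
#!/bin/sh
# Hypothetical helper: build the ffmpeg command as a string so the argument
# order (the -ss timestamp placed before -i, for fast input seeking) can be
# checked without touching any video file.
build_grab_cmd() {
  # $1 input video, $2 timestamp, $3 output image
  printf 'ffmpeg -ss %s -i %s -vframes 1 -q:v 2 %s' "$2" "$1" "$3"
}

build_grab_cmd video.mp4 00:33 output.jpg
# prints: ffmpeg -ss 00:33 -i video.mp4 -vframes 1 -q:v 2 output.jpg
```

Placing `-ss` before `-i` makes ffmpeg seek in the input rather than decode up to the timestamp, which is much faster for long videos.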

Grab all frames from video

#!/bin/sh -x
# $1 - input file, e.g. input.mp4
ffmpeg -i "$1" -q:v 2 %06d.jpg
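The `%06d` pattern zero-pads the frame number to six digits so the files sort correctly in a directory listing. A quick sketch of the numbering, using the same format specifier in `printf` (no ffmpeg needed):

```shell
#!/bin/sh
# printf understands the same %06d specifier as ffmpeg's output pattern:
for n in 1 2 10 123456; do
  printf '%06d.jpg\n' "$n"
done
# prints:
# 000001.jpg
# 000002.jpg
# 000010.jpg
# 123456.jpg
```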

Combine jpg files into one video

ffmpeg -f image2 -framerate 30 -i %06d.jpg -c:v libx264 out.mp4
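By default the image2 demuxer reads the numbered files in order and stops at the first gap in the sequence, so it can be worth confirming the frames are contiguous before encoding. A minimal sketch (`count_frames` is a hypothetical helper, not part of the original snippet):

```shell
#!/bin/sh
# Hypothetical helper: count how many frames 000001.jpg, 000002.jpg, ...
# exist contiguously in the current directory.
count_frames() {
  n=1
  while [ -e "$(printf '%06d.jpg' "$n")" ]; do
    n=$((n + 1))
  done
  echo $((n - 1))
}
```

If the count is lower than the number of files extracted, the sequence has a gap and the resulting video will be shorter than expected.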
rlan commented Sep 13, 2023

@ankurbhatia24 I see you want to understand the cause of the difference. I don't know the answer; in fact, it raises more questions for me. Like you, I would start with the color space of the video vs. the images. I would also want a rigorous way to measure the luma difference, e.g. bits in the files or some definition of luma intensity. If you end up doing the analysis, I'd be interested in the findings.

@eyeandroid

Thanks for the scripts!

I built a web-based tool (Video To JPG) that also uses FFmpeg, but I added a sharpness detection feature on top of it.

Instead of manually sifting through hundreds of extracted frames to find a focused shot, the tool automatically calculates a clarity score and highlights the crispest images. It’s really helpful for filtering out motion blur. Hope it helps someone looking for the "best" frame rather than just "all" frames.
