I'm far from an FFmpeg expert, but I believe it's possible to segment the input video, transcode the segments one by one, and then concatenate them. Not sure how the segmentation and concatenation steps perform, but if that's fast, this might even improve your overall transcoding speed due to the parallelization.
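To make that concrete, here's a rough sketch of what I have in mind. The filenames, chunk length, and codec settings are all placeholders, and I haven't benchmarked this, but the split and concat steps are stream copies (no re-encoding), so only step 2 should be expensive:

```shell
# 1. Split into ~60s chunks without re-encoding (cuts land on keyframes,
#    so actual chunk lengths will vary).
ffmpeg -i input.mp4 -c copy -map 0 -f segment -segment_time 60 seg_%03d.mp4

# 2. Transcode each chunk. The jobs are independent, so they can run in
#    parallel -- here as background processes, but each one could just as
#    well be its own Lambda invocation.
for f in seg_*.mp4; do
  ffmpeg -i "$f" -c:v libx264 -crf 23 -c:a aac "out_$f" &
done
wait

# 3. Build a concat list and stitch the transcoded chunks back together.
for f in out_seg_*.mp4; do printf "file '%s'\n" "$f"; done > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy final.mp4
```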
Media companies are already doing this with ffmpeg, AWS Lambda, and AWS Step Functions. I heard from two companies using this approach at AWS re:Invent in October 2017, so it's definitely possible.
Rolling your own approach like this is certainly more complex to build/maintain than using Elastic Transcoder though.
If you know that you'll need more than 8 minutes, why wouldn't you just run ffmpeg on EC2? EC2 is now pay-per-second. I haven't looked at the prices recently; is AWS Lambda so much cheaper that it's worth jumping through all these extra hoops?
Also not an expert, but since video is encoded as keyframes plus deltas applied to those keyframes, I don't think splitting it is as simple as segmenting something like a CSV. Transcoding might be required just for the segmentation step. Putting it back together might be easier, but the final output file might also be larger because of container overhead.
The segmentation code is keyframe-aware, so it only splits at keyframe boundaries. In other words, requesting segments of 30 seconds each probably won't get you segments that are exactly 30 seconds long. Still, there could be plenty of other obstacles I'm not aware of.
Neither am I. It's pretty simple to do, though, and the steps that aren't encoding are a lot quicker, since they mainly just copy the encoded streams into an intermediate format and then concatenate those together.
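For the intermediate format, one option I've seen is MPEG-TS, which concatenates cleanly. A hedged sketch (the `out_seg_*.mp4` names are hypothetical, and the bitstream filter assumes the chunks are H.264):

```shell
# Remux each encoded chunk to MPEG-TS -- a stream copy, no re-encode.
# h264_mp4toannexb converts the H.264 bitstream to the Annex B form
# that MPEG-TS expects.
for f in out_seg_*.mp4; do
  ffmpeg -i "$f" -c copy -bsf:v h264_mp4toannexb "${f%.mp4}.ts"
done

# Concatenate the .ts files back into a single MP4, again via stream copy.
printf "file '%s'\n" out_seg_*.ts > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy final.mp4
```

Since every step here is `-c copy`, the cost is basically disk I/O rather than CPU.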