Kodeco Forums

How to Play, Record, and Merge Videos in iOS and Swift

Learn the basics of working with videos on iOS with AV Foundation in this tutorial. You'll play, record, and even do some light video editing!


This is a companion discussion topic for the original entry at https://www.raywenderlich.com/1619-how-to-play-record-and-merge-videos-in-ios-and-swift

This code doesn’t work in Xcode 7.3.

Great work! Please help me with a small issue: after merging two video files into a single file, the output video doesn’t have any audio. I need the merged file to keep the audio from both original videos. Any suggestion would be great.
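For reference, a minimal sketch of a merge that also carries the audio across, assuming both source clips contain an audio track (firstAsset and secondAsset are placeholder AVAssets for the two recordings):

    let mixComposition = AVMutableComposition()

    // Separate mutable tracks for the video and the audio.
    let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                    preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
    let audioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio,
                                                    preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

    var insertTime = kCMTimeZero
    for asset in [firstAsset, secondAsset] {
        let range = CMTimeRangeMake(kCMTimeZero, asset.duration)
        do {
            // Append this clip's video track.
            try videoTrack.insertTimeRange(range,
                                           of: asset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                           at: insertTime)
            // Append this clip's audio track, if it has one.
            if let sourceAudio = asset.tracks(withMediaType: AVMediaTypeAudio).first {
                try audioTrack.insertTimeRange(range, of: sourceAudio, at: insertTime)
            }
        } catch {
            print("Failed to insert track: \(error)")
        }
        insertTime = CMTimeAdd(insertTime, asset.duration)
    }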

I have been following this tutorial but have run into an error that I am unable to resolve. I am at the end of the Record and Save Video section. I am using Xcode 8.1 with Swift 3.

When building the code I get the following error:

Type ‘RecordVideoViewController’ has no member ‘video(_:didFinishSavingWithError:contextInfo:)’

This is in the RecordVideoViewController extension on the following line:

        if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
            UISaveVideoAtPathToSavedPhotosAlbum(path, self, #selector(RecordVideoViewController.video(_:didFinishSavingWithError:contextInfo:)), nil)
        }

Here is the video function:

func video(videoPath: NSString, didFinishSavingWithError error: NSError?, contextInfo info: AnyObject) {
    var title = "Success"
    var message = "Video was saved"
    if let _ = error {
        title = "Error"
        message = "Video failed to save"
    }
    let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "OK", style: UIAlertActionStyle.cancel, handler: nil))
    present(alert, animated: true, completion: nil)
}

After lots of attempts to fix this and lots of googling I just can’t figure it out. Any ideas?

EDIT: By downloading and opening the final project I was able to determine that I just needed to add an underscore to the video function like so, and all is now well:

func video(_ videoPath: NSString, didFinishSavingWithError error: NSError?, contextInfo info: AnyObject) {

Awesome tutorial! But now with iOS 10 I’ve been able to get almost everything working except for the merging, where ALAssetsLibrary is deprecated and the recommendation is to use PHPhotoLibrary instead. Just wondering if you have any recommendations, resources, or tutorials for PHPhotoLibrary.

Thanks all!

Will
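
For reference (re: the PHPhotoLibrary question above), saving an exported movie file is fairly compact. A minimal sketch, assuming outputURL points at the exported file and the Photo Library usage description is set in Info.plist:

    import Photos

    // Ask for permission once, then write the file into the photo library.
    PHPhotoLibrary.requestAuthorization { status in
        guard status == .authorized else { return }
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL)
        }) { saved, error in
            print(saved ? "Video saved" : "Save failed: \(String(describing: error))")
        }
    }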

@wmbertrand Merging videos with iOS 10 is pretty straightforward. The following code snippet works for me as of Xcode 8.2.1 and iOS 10.2. I took the idea from this web site:

    let firstAsset = AVURLAsset(url: tempURL)
    let secondAsset = AVURLAsset(url: inputURL)
    
    let mixComposition = AVMutableComposition()
    
    do {
        try mixComposition.insertTimeRange(CMTimeRangeMake(kCMTimeZero, firstAsset.duration),
                                           of: firstAsset,
                                           at: kCMTimeZero)
    } catch _ {
        print("Failed to load first track")
    }
    
    do {
        try mixComposition.insertTimeRange(CMTimeRangeMake(kCMTimeZero, secondAsset.duration), of: secondAsset, at: firstAsset.duration)
    } catch _ {
        print("Failed to load second track")
    }

    guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.outputURL = mainVideoURL
    exporter.outputFileType = AVFileTypeQuickTimeMovie
    
    exporter.exportAsynchronously() {
        DispatchQueue.main.async { _ in
            print("export finished")
            completionHandler(self.mainVideoURL, nil)
        }
    }

This appends secondAsset to firstAsset and exports the result to mainVideoURL.
I should mention that the videos I merge with this code have no audio tracks.

Regards
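
One caveat about the snippet above: it prints “export finished” without checking whether the export actually succeeded. A minimal sketch of inspecting the session status first, using the same exporter:

    exporter.exportAsynchronously {
        DispatchQueue.main.async {
            switch exporter.status {
            case .completed:
                // The merged movie is now at exporter.outputURL.
                print("Export finished: \(String(describing: exporter.outputURL))")
            case .failed, .cancelled:
                // exporter.error explains what went wrong (bad output path, existing file, etc.).
                print("Export failed: \(String(describing: exporter.error))")
            default:
                break
            }
        }
    }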

Can you please upload a Swift 3 version of this?

I think the merging code in this tutorial doesn’t work for the front-facing camera. I have already tested it myself.

I’m able to merge back and front-facing camera footage (I’m using SwiftyCam for recording: GitHub - Awalz/SwiftyCam: A Snapchat Inspired iOS Camera Framework written in Swift).

Working on Xcode 8.2.1, Swift 3+.

func orientationFromTransform(transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) {
    var assetOrientation = UIImageOrientation.up
    var isPortrait = false
    if transform.a == 0 && transform.b == 1.0 && (transform.c == -1.0 || transform.c == 1.0) && transform.d == 0 {
        assetOrientation = .right
        isPortrait = true
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
        assetOrientation = .left
        isPortrait = true
    } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
        assetOrientation = .up
    } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
        assetOrientation = .down
    }
    return (assetOrientation, isPortrait)
}

func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
    
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
    
    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform: transform)
    
    var scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait {
        scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
        var portraitTransform = assetTrack.preferredTransform.concatenating(scaleFactor)
        let yTransform = assetTrack.preferredTransform.ty
        if yTransform < 0 {
            portraitTransform = portraitTransform.concatenating(CGAffineTransform.init(translationX: 0, y: -(yTransform/2)))
        }
        instruction.setTransform(portraitTransform, at: kCMTimeZero)
    } else {
        let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
        var concat = assetTrack.preferredTransform.concatenating(scaleFactor).concatenating(CGAffineTransform(translationX: 0, y: UIScreen.main.bounds.width/2))
        if assetInfo.orientation == .down {
            let fixUpsideDown = CGAffineTransform(rotationAngle: CGFloat(Double.pi))
            let windowBounds = UIScreen.main.bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransform.init(translationX: assetTrack.naturalSize.width, y: yFix)
            concat = fixUpsideDown.concatenating(centerFix).concatenating(scaleFactor)
        }
        instruction.setTransform(concat, at: kCMTimeZero)
    }
    return instruction
}
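
In case it helps, a rough sketch of how these layer instructions plug into an export, along the lines of the tutorial (firstTrack/secondTrack are the composition tracks, firstAsset/secondAsset the source assets, mixComposition the AVMutableComposition, and exporter the AVAssetExportSession; all placeholder names):

    // One instruction covering the whole composition, with a layer instruction per track.
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
    mainInstruction.layerInstructions = [
        videoCompositionInstructionForTrack(track: firstTrack, asset: firstAsset),
        videoCompositionInstructionForTrack(track: secondTrack, asset: secondAsset)
    ]

    // Wrap it in a video composition and hand it to the export session.
    let mainComposition = AVMutableVideoComposition()
    mainComposition.instructions = [mainInstruction]
    mainComposition.frameDuration = CMTimeMake(1, 30)
    mainComposition.renderSize = CGSize(width: UIScreen.main.bounds.width,
                                        height: UIScreen.main.bounds.height)

    exporter.videoComposition = mainComposition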

Some quick changes to get the sample project building under the latest version of Xcode (as of 8.3.2):

  1. When you open the project, allow Xcode to convert the code to Swift 3

  2. In RecordVideoViewController.imagePickerController:

Change:
guard let path = (info[UIImagePickerControllerMediaURL] as! URL).path else { return }

To:
guard let path = (info[UIImagePickerControllerMediaURL] as? URL)?.path else { return }

  3. In Info.plist:

Add:
Key: Privacy - Camera Usage Description
Value: We use the camera to record the video.

Key: Privacy - Microphone Usage Description
Value: We use the microphone to record the video’s sound.

Key: Privacy - Photo Library Usage Description
Value: We save the final video in the Photo album.
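
With those keys in place, iOS shows each permission prompt automatically the first time the camera, microphone, or photo library is used. If you prefer to trigger the prompts up front, a small sketch using the Swift 3-era APIs:

    import AVFoundation
    import Photos

    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
        print("Camera access granted: \(granted)")
    }
    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeAudio) { granted in
        print("Microphone access granted: \(granted)")
    }
    PHPhotoLibrary.requestAuthorization { status in
        print("Photo library authorization: \(status.rawValue)")
    }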

Merging doesn’t work when the videos were recorded with both the front and back cameras. I’ve just fixed it:
    let mixComposition = AVMutableComposition()
    let track = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

    for index in 0..<videoClips.count {
        let avAsset = AVAsset(url: videoClips[index])
        
        do {
            try track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, avAsset.duration), of: avAsset.tracks(withMediaType: AVMediaTypeVideo)[0], at: index == 0 ? kCMTimeZero:mixComposition.duration)
        } catch _ {
            print("Failed to load track")
        }
    }
    
    guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
  
    let outputPath = mainVideoURL
    print(outputPath ?? "aaaa")
    exporter.outputURL = URL(fileURLWithPath: outputPath!)
    exporter.outputFileType = AVFileTypeQuickTimeMovie
    
    exporter.exportAsynchronously() {
        DispatchQueue.main.async { _ in
            print("export finished")
            self.exportDidFinish(exporter)
        }
    }

Would it be possible to post the whole page of code or private message it? I am having a hard time trying to fix this and would appreciate any help as I am still a newbie.

Thanks,
Michael

//
// CameraViewController.swift
// Sezzwho
//
// Created by Mobdev125 on 5/25/17.
// Copyright © 2017 Mobdev125. All rights reserved.
//

import UIKit
import SwiftyCam
import Material
import Photos
import QuartzCore
import AVFoundation
import CoreMedia

let maxVideoTime = 30.0

class CameraViewController: SwiftyCamViewController {

var flipCameraButton: UIButton!
var flashButton: UIButton!
var captureButton: SwiftyRecordButton!
var closeButton: UIButton!
var nextButton: UIButton!
var retryButton: UIButton!
var timeLabel: UILabel!

var videoClips = [URL]()
var finnalVideoUrl:URL?
var remainedTime = maxVideoTime

var recordingTimer: Timer!

override func viewDidLoad() {
    super.viewDidLoad()
    cameraDelegate = self
    maximumVideoDuration = remainedTime
    shouldUseDeviceOrientation = true
    videoQuality = .high
    addButtons()
    setTimeLabel()
    self.navigationController?.navigationBar.isHidden = true
}

override var prefersStatusBarHidden: Bool {
    return true
}

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    if remainedTime == 0 {
        captureButton.isEnabled = false
    }
    else {
        captureButton.isEnabled = true
    }
}

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == "gotoVideoVC" {
        let videoVC = segue.destination as! VideoViewController
        videoVC.videoURL = sender as? URL
    }
}
private func addButtons() {
    let bottomView = UIView(frame: CGRect(x: 0, y: view.frame.height - 125.0, width: view.frame.width, height: 125))
    bottomView.backgroundColor = UIColor(red: 0, green: 0, blue: 0, alpha: 0.4)
    self.view.addSubview(bottomView)
    
    captureButton = SwiftyRecordButton(frame: CGRect(x: view.frame.midX - 37.5, y: view.frame.height - 100.0, width: 75.0, height: 75.0))
    self.view.addSubview(captureButton)
    captureButton.delegate = self
    
    flipCameraButton = UIButton(frame: CGRect(x: (((view.frame.width / 2 - 37.5) / 2) - 15.0), y: view.frame.height - 74.0, width: 30.0, height: 23.0))
    flipCameraButton.setImage(#imageLiteral(resourceName: "CameraSwitch"), for: UIControlState())
    flipCameraButton.addTarget(self, action: #selector(cameraSwitchAction(_:)), for: .touchUpInside)
    self.view.addSubview(flipCameraButton)
    
    let test = CGFloat((view.frame.width - (view.frame.width / 2 + 37.5)) + ((view.frame.width / 2) - 37.5) - 9.0)
    
    flashButton = UIButton(frame: CGRect(x: test, y: view.frame.height - 77.5, width: 18.0, height: 30.0))
    flashButton.setImage(#imageLiteral(resourceName: "flashOutline"), for: UIControlState())
    flashButton.addTarget(self, action: #selector(toggleFlashAction(_:)), for: .touchUpInside)
    self.view.addSubview(flashButton)
    
    let topView = UIView(frame: CGRect(x: 0, y: 0, width: view.frame.width, height: 70))
    topView.backgroundColor = UIColor(red: 0, green: 0, blue: 0, alpha: 0.4)
    self.view.addSubview(topView)
    
    closeButton = UIButton(frame: CGRect(x: 20, y: 20, width: 30, height: 30))
    closeButton.setImage(Icon.close, for: UIControlState())
    closeButton.tintColor = UIColor.white
    closeButton.addTarget(self, action: #selector(closeAction(_:)), for: .touchUpInside)
    self.view.addSubview(closeButton)
    
    nextButton = UIButton(frame: CGRect(x: view.frame.width - 50, y: 20, width: 30, height: 30))
    nextButton.setImage(#imageLiteral(resourceName: "ic_arrow_next_white"), for: UIControlState())
    nextButton.addTarget(self, action: #selector(nextAction(_:)), for: .touchUpInside)
    self.view.addSubview(nextButton)
    
    timeLabel = UILabel(frame: CGRect(x: 60, y: 20, width: view.frame.width - 120, height: 30))
    timeLabel.textAlignment = .center
    timeLabel.textColor = .white
    timeLabel.text = "00:60"
    self.view.addSubview(timeLabel)
    
    retryButton = UIButton(frame: CGRect(x: view.frame.midX - 30, y: 80, width: 60, height: 30))
    retryButton.backgroundColor = UIColor(red: 0, green: 0, blue: 0, alpha: 0.4)
    retryButton.cornerRadius = 4
    retryButton.setTitle("Retry!", for: UIControlState())
    retryButton.titleLabel?.textColor = .white
    retryButton.tintColor = .white
    retryButton.addTarget(self, action: #selector(retryAction(_:)), for: .touchUpInside)
    self.view.addSubview(retryButton)
}

}

// Actions
extension CameraViewController {
@objc fileprivate func cameraSwitchAction(_ sender: Any) {
switchCamera()
}

@objc fileprivate func toggleFlashAction(_ sender: Any) {
    flashEnabled = !flashEnabled
    
    if flashEnabled == true {
        flashButton.setImage(#imageLiteral(resourceName: "flash"), for: UIControlState())
    } else {
        flashButton.setImage(#imageLiteral(resourceName: "flashOutline"), for: UIControlState())
    }
}

@objc fileprivate func closeAction(_ sender: Any) {
    self.dismiss(animated: true, completion: nil)
}

@objc fileprivate func nextAction(_ sender: Any) {
    mergeVideos()
}

@objc fileprivate func retryAction(_ sender: Any) {
    for url in videoClips {
        try! FileManager.default.removeItem(at: url)
    }
    if let url = finnalVideoUrl {
        try! FileManager.default.removeItem(at: url)
    }
    videoClips.removeAll()
    remainedTime = maxVideoTime
    setTimeLabel()
}

@objc fileprivate func countDownTime() {
    remainedTime = remainedTime - 1
    if remainedTime < 0 {
        recordingTimer.invalidate()
        return
    }
    
    setTimeLabel()
}

func setTimeLabel() {
    if remainedTime < 10 {
        timeLabel.text = "00:0\(Int(remainedTime))"
    }
    else {
        timeLabel.text = "00:\(Int(remainedTime))"
    }
}

func exportDidFinish(_ session: AVAssetExportSession) {
    if session.status == AVAssetExportSessionStatus.completed {
        guard let outputURL = session.outputURL else { return }
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL)
        }) { completed, error in
            if completed {
                print("Video is saved!")
                self.finnalVideoUrl = outputURL
                self.performSegue(withIdentifier: "gotoVideoVC", sender: outputURL)
            }
        }
    }
}

func mergeVideos() {
    if videoClips.count == 0 {
        return
    }
    else if videoClips.count == 1 {
        self.performSegue(withIdentifier: "gotoVideoVC", sender: videoClips[0])
        return
    }
    
    // merge
    let mixComposition = AVMutableComposition()
    let track = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
    
    for index in 0..<videoClips.count {
        let avAsset = AVAsset(url: videoClips[index])
        do {
            try track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, avAsset.duration), of: avAsset.tracks(withMediaType: AVMediaTypeVideo)[0], at: index == 0 ? kCMTimeZero:mixComposition.duration)
        } catch _ {
            print("Failed to load track")
        }
    }
    track.preferredTransform = CGAffineTransform(rotationAngle: .pi/2)
    
    guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
    let dateFormatter = DateFormatter()
    dateFormatter.dateStyle = .long
    dateFormatter.timeStyle = .short
    let date = dateFormatter.string(from: Date())
    let outputPath = FileUtils.getSaveFilePath()?.appending("/mergeVideo-\(date).mov")
    exporter.outputURL = URL(fileURLWithPath: outputPath!)
    exporter.outputFileType = AVFileTypeQuickTimeMovie
    
    exporter.exportAsynchronously() {
        DispatchQueue.main.async { _ in
            print("export finished")
            self.exportDidFinish(exporter)
        }
    }
}

}

extension CameraViewController: SwiftyCamViewControllerDelegate {
func swiftyCam(_ swiftyCam: SwiftyCamViewController, didTake photo: UIImage) {

}

func swiftyCam(_ swiftyCam: SwiftyCamViewController, didBeginRecordingVideo camera: SwiftyCamViewController.CameraSelection) {
    print("Did Begin Recording")
    recordingTimer = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(countDownTime), userInfo: nil, repeats: true)
    
    captureButton.growButton()
    UIView.animate(withDuration: 0.25, animations: {
        self.flashButton.alpha = 0.0
        self.flipCameraButton.alpha = 0.0
    })
}

func swiftyCam(_ swiftyCam: SwiftyCamViewController, didFinishRecordingVideo camera: SwiftyCamViewController.CameraSelection) {
    recordingTimer.invalidate()
    maximumVideoDuration = remainedTime
    print("Did finish Recording")
    captureButton.shrinkButton()
    UIView.animate(withDuration: 0.25, animations: {
        self.flashButton.alpha = 1.0
        self.flipCameraButton.alpha = 1.0
    })
}

func swiftyCam(_ swiftyCam: SwiftyCamViewController, didFinishProcessVideoAt url: URL) {
    videoClips.append(url)
    if remainedTime <= 0 {
        nextAction(self)
    }
}

func swiftyCam(_ swiftyCam: SwiftyCamViewController, didFocusAtPoint point: CGPoint) {
    let focusView = UIImageView(image: #imageLiteral(resourceName: "focus"))
    focusView.center = point
    focusView.alpha = 0.0
    view.addSubview(focusView)
    
    UIView.animate(withDuration: 0.25, delay: 0.0, options: .curveEaseInOut, animations: {
        focusView.alpha = 1.0
        focusView.transform = CGAffineTransform(scaleX: 1.25, y: 1.25)
    }, completion: { (success) in
        UIView.animate(withDuration: 0.15, delay: 0.5, options: .curveEaseInOut, animations: {
            focusView.alpha = 0.0
            focusView.transform = CGAffineTransform(translationX: 0.6, y: 0.6)
        }, completion: { (success) in
            focusView.removeFromSuperview()
        })
    })
}

func swiftyCam(_ swiftyCam: SwiftyCamViewController, didChangeZoomLevel zoom: CGFloat) {
    print(zoom)
}

func swiftyCam(_ swiftyCam: SwiftyCamViewController, didSwitchCameras camera: SwiftyCamViewController.CameraSelection) {
    print(camera)
}

}

I used “SwiftyCam”. Above is the full source code. It records video with the front and rear cameras and merges the clips. I hope this helps you.
Kind Regards.
Martin.

Thanks so much Martin, I really appreciate it…I’ll give this a shot later tonight.

Thanks again!
Michael

Hey Marco, I was wondering if you had any problem exporting video recorded with the front-facing camera. I can get the back-facing camera recording to scale correctly, but no matter what I do, the front-facing video doesn’t get adjusted, regardless of the scaleToFitRatio.

Much appreciated and hope to hear back!
-Elliott

Dear Andy,

I tried this tutorial with Swift 3, but at the last step, Video Orientation, the resulting merged video is black. I can still hear the merged audio.

I tried both following the tutorial step by step and converting the final code to Swift 3, but got the same problem.

Hey, I’ll have to see if there are some changes to the API for this. It is a bit outdated considering the changes to Swift, so I’ll see if I can find the issue.

Hi Marco,
I have a project written in Objective-C, and I have problems when trying to merge landscape video from the front camera with portrait video from the back camera, and also portrait video from the front camera with portrait video from the back camera.
This is my code:

       CGAffineTransform orientationTransform = videoAssetTrack.preferredTransform;
        CGSize naturalSize = CGSizeApplyAffineTransform(videoAssetTrack.naturalSize, orientationTransform);
        naturalSize.width = fabs(naturalSize.width);
        naturalSize.height = fabs(naturalSize.height);

        // Make sure the video is transformed properly - Apply Aspect Fill
        float scale = naturalSize.width < naturalSize.height ? resultSize.width / naturalSize.width : resultSize.height / naturalSize.height;

        CGPoint recenter;
        recenter.x = (resultSize.width - naturalSize.width * scale) * 0.5;
        recenter.y = (resultSize.height - naturalSize.height * scale) * 0.5;

        // If orientationTransform rotated the video in a way that changed the orientation, switch recenter
        if ((videoAssetTrack.naturalSize.width > videoAssetTrack.naturalSize.height) != (naturalSize.width > naturalSize.height)) {
            typeof(recenter.x) temp = recenter.x;
            recenter.x = recenter.y;
            recenter.y = temp;
        }    
        CGAffineTransform transform = CGAffineTransformConcat(CGAffineTransformConcat(CGAffineTransformMakeScale(scale, scale),  CGAffineTransformMakeTranslation(recenter.x, recenter.y)), orientationTransform);

        [layerInstruction setTransform:transform atTime:timeOffset];

Then I tried translating your code into Objective-C, and this is the result:

           CGAffineTransform orientationTransform = videoAssetTrack.preferredTransform;
            ALOrientation  assetTransform = [self orientationFromTransform:orientationTransform];
            float scaleToFitRatio = resultSize.width / videoAssetTrack.naturalSize.width;
            
            if (assetTransform.isPortrait == YES) {
                scaleToFitRatio = resultSize.width / videoAssetTrack.naturalSize.height;
                CGAffineTransform scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio);
                CGAffineTransform portraitTransform = CGAffineTransformConcat(orientationTransform, scaleFactor);
                CGFloat yTransform = videoAssetTrack.preferredTransform.ty;
                if (yTransform < 0 ) {
                    portraitTransform = CGAffineTransformConcat(portraitTransform, CGAffineTransformMakeTranslation(0, -yTransform/2));
                }
                [layerInstruction setTransform:portraitTransform atTime:timeOffset];
                
            } else {
                CGAffineTransform scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio);
                CGAffineTransform concat = CGAffineTransformConcat(CGAffineTransformConcat(orientationTransform, scaleFactor), CGAffineTransformMakeTranslation(0, resultSize.width/2));
                if (assetTransform.orientation == UIImageOrientationDown) {
                    CGAffineTransform fixUpsideDown = CGAffineTransformMakeRotation(M_PI);
                    CGFloat yFix = videoAssetTrack.naturalSize.height + resultSize.height;
                    CGAffineTransform centerFix = CGAffineTransformMakeTranslation(videoAssetTrack.naturalSize.width, yFix);
                    concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor);
                }
                [layerInstruction setTransform:concat atTime:timeOffset];
            }

Can you please take a look and tell me if I mistranslated any part of the Swift code?

Thanks,
MihaiRb
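
For reference, one thing that is easy to get backwards when translating: a.concatenating(b) in Swift corresponds to CGAffineTransformConcat(a, b) in Objective-C, with the same argument order (a is applied first, then b). A tiny sketch of the equivalence:

    import CoreGraphics

    let scale = CGAffineTransform(scaleX: 2, y: 2)
    let translate = CGAffineTransform(translationX: 10, y: 0)

    // Swift:       scale.concatenating(translate)
    // Objective-C: CGAffineTransformConcat(scale, translate)
    // Both apply `scale` first, then `translate`.
    let combined = scale.concatenating(translate)

    // A point at (1, 1) ends up at (2 * 1 + 10, 2 * 1) = (12, 2).
    print(CGPoint(x: 1, y: 1).applying(combined))  // (12.0, 2.0)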

This tutorial is more than six months old, so questions regarding it are no longer supported for the moment. We will update it as soon as possible. Thank you! :]