I'm trying to do some basic video composition in Xamarin/MonoTouch and have had some success, but I'm stuck on what seems like a fairly simple task.
I record video from the camera in portrait, so I use AVAssetExportSession to rotate the video. I created a layer instruction to rotate the video and that works fine: I can successfully export the video in the correct orientation.
The problem:
When I add the audio track to the export, I always get a failed response with the following error:
Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped" UserInfo=0x1912c320 {NSLocalizedDescription=Operation Stopped, NSLocalizedFailureReason=The video could not be composed.}
If I don't set the VideoComposition property on the exportSession, the audio and video export perfectly fine, just with the wrong orientation. If anyone could give me some advice it would be much appreciated. My code is below:
var composition = new AVMutableComposition();
var compositionTrackAudio = composition.AddMutableTrack(AVMediaType.Audio, 0);
var compositionTrackVideo = composition.AddMutableTrack(AVMediaType.Video, 0);
var videoCompositionInstructions = new AVVideoCompositionInstruction[files.Count];
var index = 0;
var renderSize = new SizeF(480, 480);
var _startTime = CMTime.Zero;
// NOTE: `file` comes from the list of recorded clips (`files`); the surrounding
// per-file loop is not shown here, which is why `index` and `_startTime` are advanced below.
//AVUrlAsset asset;
var asset = new AVUrlAsset(new NSUrl(file, false), new AVUrlAssetOptions());
//var asset = AVAsset.FromUrl(new NSUrl(file, false));
//create an AVAssetTrack from our asset
var videoTrack = asset.TracksWithMediaType(AVMediaType.Video)[0];
var audioTrack = asset.TracksWithMediaType(AVMediaType.Audio)[0];
//create a video composition and preset some settings
NSError error;
var assetTimeRange = new CMTimeRange { Start = CMTime.Zero, Duration = asset.Duration };
compositionTrackAudio.InsertTimeRange(new CMTimeRange
{
    Start = CMTime.Zero,
    Duration = asset.Duration,
}, audioTrack, _startTime, out error);
if (error != null)
{
    Debug.WriteLine(error.Description);
}
compositionTrackVideo.InsertTimeRange(assetTimeRange, videoTrack, _startTime, out error);
//create a video instruction
var transformer = new AVMutableVideoCompositionLayerInstruction
{
    TrackID = videoTrack.TrackID,
};
var audioMix = new AVMutableAudioMix();
var mixParameters = new AVMutableAudioMixInputParameters
{
    TrackID = audioTrack.TrackID
};
mixParameters.SetVolumeRamp(1.0f, 1.0f, new CMTimeRange
{
    Start = CMTime.Zero,
    Duration = asset.Duration
});
audioMix.InputParameters = new [] { mixParameters };
var t1 = CGAffineTransform.MakeTranslation(videoTrack.NaturalSize.Height, 0);
//Make sure the square is portrait
var t2 = CGAffineTransform.Rotate(t1, (float)(Math.PI / 2f));
var finalTransform = t2;
transformer.SetTransform(finalTransform, CMTime.Zero);
//add the transformer layer instructions, then add to video composition
var instruction = new AVMutableVideoCompositionInstruction
{
    TimeRange = assetTimeRange,
    LayerInstructions = new []{ transformer }
};
videoCompositionInstructions[index] = instruction;
index++;
_startTime = CMTime.Add(_startTime, asset.Duration);
var videoComposition = new AVMutableVideoComposition();
videoComposition.FrameDuration = new CMTime(1, (int)videoTrack.NominalFrameRate);
videoComposition.RenderScale = 1;
videoComposition.Instructions = videoCompositionInstructions;
videoComposition.RenderSize = renderSize;
var exportSession = new AVAssetExportSession(composition, AVAssetExportSession.PresetHighestQuality);
var filePath = _fileSystemManager.TempDirectory + DateTime.UtcNow.Ticks + ".mp4";
var outputLocation = new NSUrl(filePath, false);
exportSession.OutputUrl = outputLocation;
exportSession.OutputFileType = AVFileType.Mpeg4;
exportSession.VideoComposition = videoComposition;
exportSession.AudioMix = audioMix;
exportSession.ShouldOptimizeForNetworkUse = true;
exportSession.ExportAsynchronously(() =>
{
    Debug.WriteLine(exportSession.Status);
    switch (exportSession.Status)
    {
        case AVAssetExportSessionStatus.Failed:
        {
            Debug.WriteLine(exportSession.Error.Description);
            Debug.WriteLine(exportSession.Error.DebugDescription);
            break;
        }
        case AVAssetExportSessionStatus.Completed:
        {
            if (File.Exists(filePath))
            {
                _uploadService.AddVideoToVideoByteList(File.ReadAllBytes(filePath), ".mp4");
                Task.Run(async () =>
                {
                    await _uploadService.UploadVideo(_videoData);
                });
            }
            break;
        }
        case AVAssetExportSessionStatus.Unknown:
        {
            break;
        }
        case AVAssetExportSessionStatus.Exporting:
        {
            break;
        }
        case AVAssetExportSessionStatus.Cancelled:
        {
            break;
        }
    }
});
So this turned out to be a really silly mistake: it was caused by adding the audio track to the composition before the video track, so the layer instruction must have been trying to apply the transform to the audio track instead of my video track.
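For illustration, here is a minimal sketch of the corrected setup, assuming the same asset, videoTrack, and audioTrack variables as in the code above; the video track is inserted first, and the layer instruction is keyed to the composition track's ID rather than the source asset's, so the transform can only ever target video:
var composition = new AVMutableComposition();
// add the video track to the composition first so the transform instruction targets it
var compositionTrackVideo = composition.AddMutableTrack(AVMediaType.Video, 0);
var compositionTrackAudio = composition.AddMutableTrack(AVMediaType.Audio, 0);
NSError error;
var assetTimeRange = new CMTimeRange { Start = CMTime.Zero, Duration = asset.Duration };
compositionTrackVideo.InsertTimeRange(assetTimeRange, videoTrack, CMTime.Zero, out error);
compositionTrackAudio.InsertTimeRange(assetTimeRange, audioTrack, CMTime.Zero, out error);
var transformer = new AVMutableVideoCompositionLayerInstruction
{
    // key the instruction to the composition's video track rather than
    // videoTrack.TrackID from the source asset, so it cannot point at the audio track
    TrackID = compositionTrackVideo.TrackID,
};
Referencing compositionTrackVideo.TrackID instead of the source track's ID also makes the instruction independent of the order in which the tracks happen to be added.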
My problem was that I had forgotten to set the timeRange; it should look like this:
let instruction = AVMutableVideoCompositionInstruction()
instruction.layerInstructions = [layer]
instruction.timeRange = CMTimeRange(start: kCMTimeZero, duration: videoDuration)
Note that the end time of the AVMutableVideoCompositionInstruction.timeRange must be valid. It is not the same thing as AVAssetExportSession.timeRange, which is the time range to be exported from the source. The default time range of an export session is kCMTimeZero to kCMTimePositiveInfinity, meaning that (modulo a possible limit on file length) the full duration of the asset will be exported. You can observe that property using key-value observing.
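To make the distinction concrete, here is a rough C# sketch (reusing the instruction, asset, and exportSession names from the question's code; the five-second trim is just an illustrative value):
// The instruction's TimeRange describes which span of the composition this
// instruction covers; it must have a valid, finite end time.
instruction.TimeRange = new CMTimeRange { Start = CMTime.Zero, Duration = asset.Duration };
// The export session's TimeRange is optional and defaults to
// kCMTimeZero..kCMTimePositiveInfinity (i.e. export everything).
// Set it only if you want to export a portion of the source, e.g. the first 5 seconds:
exportSession.TimeRange = new CMTimeRange
{
    Start = CMTime.Zero,
    Duration = CMTime.FromSeconds(5, asset.Duration.TimeScale)
};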