iOS image processing pipeline: is GPUImage still strong compared with the Core Image library?

I'm newly involved in developing an image processing app on iOS. I have lots of experience with OpenCV, but everything on iOS, and even OS X, is new to me.

So I found there are mainly the Core Image library and the GPUImage library around for normal image processing work. I'm interested in knowing which one I should choose as a newcomer to the iOS platform. I have seen some tests done on iOS 8 on an iPhone 6, and it appears Core Image is now faster than GPUImage on GPUImage's own benchmark.

I'm actually looking for a whole solution for image processing development:

What language? Swift, Objective-C, or Clang and C++? What library? GPUImage, Core Image, OpenCV, or GEGL? Is there an example app?

My goal is to develop some advanced colour correction functions. I want them to be as fast as possible, so that in the future I can turn the image processing into video processing without much trouble.

Thanks

Best answer

I'm the author of GPUImage, so you might weigh my words appropriately. I provide a lengthy description of my design thoughts on this framework vs. Core Image in my answer here, but I can restate that.

Basically, I designed GPUImage to be a convenient wrapper around OpenGL / OpenGL ES image processing. It was built at a time when Core Image didn't exist on iOS, and even when Core Image launched there it lacked custom kernels and had some performance shortcomings.

In the meantime, the Core Image team has done impressive work on performance, leading to Core Image slightly outperforming GPUImage in several areas now. I still beat them in others, but it's way closer than it used to be.

I think the decision comes down to what you value for your application. The entire source code for GPUImage is available to you, so you can customize or fix any part of it that you want. You can look behind the curtain and see how any operation runs. The flexibility in pipeline design lets me experiment with complex operations that can't currently be done in Core Image.

Core Image comes standard with iOS and OS X. It is widely used (plenty of code available), performant, easy to set up, and (as of the latest iOS versions) is extensible via custom kernels. It can do CPU-side processing in addition to GPU-accelerated processing, which lets you do things like process images in a background process (although you should be able to do limited OpenGL ES work in the background in iOS 8). I used Core Image all the time before I wrote GPUImage.
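To give a feel for how little setup Core Image needs, here is a minimal sketch of the kind of colour correction the question asks about, using the built-in `CIColorControls` filter from Swift. The solid-colour input image is just a stand-in so the snippet is self-contained; in a real app you would load a `CIImage` from a file or a camera buffer.

```swift
import CoreImage

// A self-contained 8x8 test image (a real app would use a photo or video frame).
let input = CIImage(color: CIColor(red: 0.2, green: 0.4, blue: 0.6))
    .cropped(to: CGRect(x: 0, y: 0, width: 8, height: 8))

// CIColorControls is one of the built-in filters for basic colour correction.
let filter = CIFilter(name: "CIColorControls")!
filter.setValue(input, forKey: kCIInputImageKey)
filter.setValue(1.2, forKey: kCIInputSaturationKey)
filter.setValue(0.05, forKey: kCIInputBrightnessKey)
filter.setValue(1.1, forKey: kCIInputContrastKey)

// Filters are lazy: nothing renders until a CIContext draws the output.
let context = CIContext()  // uses the GPU where available
if let output = filter.outputImage {
    let rendered = context.createCGImage(output, from: output.extent)
    // hand `rendered` to UIKit/AppKit for display or saving
}
```

Filters can be chained by feeding one filter's `outputImage` into the next one's `kCIInputImageKey`, and the whole chain is fused into a single render pass by the context.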

For sample applications, download the GPUImage source code and look in the examples/ directory. You'll find examples of every aspect of the framework for both Mac and iOS, as well as both Objective-C and Swift. I particularly recommend building and running the FilterShowcase example on your iOS device, as it demonstrates every filter from the framework on live video. It's a fun thing to try.

In regards to language choice, if performance is what you're after for video / image processing, language makes little difference. Your performance bottlenecks will not be due to language, but will be in shader performance on the GPU and the speed at which images and video can be uploaded to / downloaded from the GPU.

GPUImage is written in Objective-C, but it can still process video frames at 60 FPS on even the oldest iOS devices it supports. Profiling the code finds very few places where message sending overhead or memory allocation (the slowest areas in this language compared with C or C++) is even noticeable. If these operations were done on the CPU, this would be a slightly different story, but this is all GPU-driven.

Use whatever language is most appropriate and easiest for your development needs. Core Image and GPUImage are both compatible with Swift, Objective-C++, or Objective-C. OpenCV might require a shim to be used from Swift, but if you're talking performance OpenCV might not be a great choice. It will be much slower than either Core Image or GPUImage.

Personally, for ease of use it can be hard to argue with Swift, since I can write an entire video filtering application using GPUImage in only 23 lines of non-whitespace code.
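A live-video filtering app of the kind described above can be sketched roughly like this with GPUImage's Swift interface. This is an illustration, not a drop-in file: it assumes the GPUImage2 API (`Camera`, `SaturationAdjustment`, `RenderView`, and the `-->` pipeline operator), a storyboard-connected `renderView` outlet, and exact names may differ between framework versions.

```swift
import UIKit
import GPUImage

class ViewController: UIViewController {
    @IBOutlet weak var renderView: RenderView!  // GPUImage's live-preview view
    var camera: Camera!

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            // Capture from the camera, adjust saturation, draw to the screen.
            camera = try Camera(sessionPreset: .vga640x480)
            let filter = SaturationAdjustment()
            filter.saturation = 1.5
            camera --> filter --> renderView
            camera.startCapture()
        } catch {
            fatalError("Could not initialize camera: \(error)")
        }
    }
}
```

The `-->` operator wires sources, filters, and outputs into a pipeline, which is where most of the brevity comes from: swapping the filter or inserting additional stages is a one-line change.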
