Past Google Summer of Code Project Ideas

Google Summer of Code 2022 Project Ideas

Improve GStreamer element to add support for properties

Description of the project:
The GStreamer element allows libcamera to be used as a video source in GStreamer. This project aims to add support for properties in the GStreamer element, so that more aspects of the stream can be controlled, such as framerate. There was already a patch series that started the effort of setting the framerate; this can be used as a starting point.
Expected results:
The ability to set properties in the GStreamer element, such as framerate.
Confirmed mentors:
Paul Elder, Vedant Paranjape
Desirable skills:
Good knowledge of C++. Some knowledge of GStreamer will be beneficial.
Expected size:
175 hours
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera with the GStreamer element enabled
  • Stream using GStreamer from the libcamera element
  • Explore how controls work in libcamera. Building a test application that uses libcamera and can set controls (or extending cam to do so) might help.
  • Explore GStreamer properties
  • How would you connect GStreamer properties to libcamera controls? This will form the design of your project.
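
To make that last point more concrete, the fragment below illustrates the GObject property mechanism that a GStreamer element uses to expose settings, and where a value such as a frame rate could eventually be translated into a libcamera control. It is only a hedged sketch: the MySrc type, the "framerate" property and the control mapping are hypothetical and are not the existing libcamerasrc code.

#include <gst/gst.h>

/* All names in this sketch (MySrc, "framerate", PROP_FRAMERATE) are
 * hypothetical; they only illustrate the GObject property mechanism and are
 * not the actual libcamerasrc API. */

typedef struct _MySrc {
	GstElement parent;
	gint64 frame_duration_us;
} MySrc;

typedef struct _MySrcClass {
	GstElementClass parent_class;
} MySrcClass;

G_DEFINE_TYPE(MySrc, my_src, GST_TYPE_ELEMENT)

enum {
	PROP_0,
	PROP_FRAMERATE,
};

static void my_src_set_property(GObject *object, guint prop_id,
				const GValue *value, GParamSpec *pspec)
{
	MySrc *self = (MySrc *)object;

	switch (prop_id) {
	case PROP_FRAMERATE:
		/* Store the requested rate as a frame duration; the streaming
		 * code would later translate it into a libcamera control
		 * (for instance a frame duration limit) on queued requests. */
		self->frame_duration_us = 1000000 / MAX(g_value_get_int(value), 1);
		break;
	default:
		G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
		break;
	}
}

static void my_src_class_init(MySrcClass *klass)
{
	GObjectClass *object_class = G_OBJECT_CLASS(klass);

	object_class->set_property = my_src_set_property;

	g_object_class_install_property(object_class, PROP_FRAMERATE,
		g_param_spec_int("framerate", "Framerate",
				 "Requested frame rate in frames per second",
				 1, 120, 30,
				 static_cast<GParamFlags>(G_PARAM_WRITABLE |
							  G_PARAM_STATIC_STRINGS)));
}

static void my_src_init(MySrc *self)
{
	self->frame_duration_us = 1000000 / 30;
}

From an application, such a property would then simply be set with g_object_set() on the element instance; the interesting design work is deciding how the stored value is applied to libcamera requests in the streaming code.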

Adding support for controls in the libcamera test applications

Description of the project:

The libcamera test applications, cam and qcam, currently do not support setting and changing controls. The goal of this project would be to add support for setting and changing controls to either of them.

For cam, this also involves figuring out how to specify when the controls should be changed and to what values. For qcam, it also involves designing the GUI, including how to display all the options.

Expected results:
Ability to set and change controls in cam and/or qcam.
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C++.
Expected size:
175 hours
Difficulty rating:
Easy

Warmup tasks

  • Build libcamera with cam/qcam enabled
  • Run cam/qcam
  • Explore how controls work in libcamera. Building a test application that uses libcamera and can set controls might help (a minimal sketch follows this list).
  • How do you envision the interface in cam/qcam for setting controls? How would you plumb it in? This will form the design of your project.
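
As a starting point for the control exploration above, this is roughly how an application sets controls on a request through libcamera's public API. Treat it as a hedged sketch: the control names and values are examples, and the exact definitions, types and valid ranges are in the libcamera control documentation.

#include <libcamera/libcamera.h>

/* Rough sketch: attach control values to a request before queueing it.
 * Assumes `request` was created with Camera::createRequest() on an acquired,
 * configured camera. */
void applyControls(libcamera::Request *request)
{
	libcamera::ControlList &ctrls = request->controls();

	/* Brightness is a float control; ExposureTime is expressed in microseconds. */
	ctrls.set(libcamera::controls::Brightness, 0.5f);
	ctrls.set(libcamera::controls::ExposureTime, 10000);

	/* Camera::start() also accepts an optional ControlList for controls
	 * that should apply from the very first frame. */
}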

OpenGL/OpenCL software ISP

Description of the project:

Image Signal Processors (ISPs) implement the functions necessary in an image pipeline to transform a raw image from a sensor into a meaningful image that can be displayed. Examples of such functions are demosaicing (debayering), noise reduction, etc. Most ISPs are currently implemented in hardware, so a software-based ISP would be useful for testing and experimentation in the absence of a hardware ISP, and would provide an open source ISP implementation.

This project requires interfacing with a GPU for computation. There are a few ways to do this, such as with OpenCL or OpenGL compute shaders. Thus choosing an API is a required task for this project (it’s best to explore/experiment and choose before the project begins). In addition to being able to use the GPU for computation, the platform that this project will be developed on requires the ability to capture raw Bayer images. This can be done either via vimc (which needs to be fixed first), or with a CSI camera on one of the platforms that libcamera currently supports.

The easiest option is to use the vimc virtual driver on a regular computer. The raw Bayer capture implementation in the vimc driver is however broken at the moment, and isn’t supported yet in libcamera. These issues would need to be addressed first.

The options for OpenCL may be more limited. These options include a Raspberry Pi 3B+, which has an unofficial OpenCL implementation; a Rockchip RK3399 device, which might require closed-source Mali drivers; an i.MX device that has a raw Bayer sensor and OpenCL support; or feeding images manually to a software ISP implementation.

There is also the option of using OpenGL compute shaders instead of OpenCL, which may be supported on a wide range of platforms.

Expected results:
A software ISP that implements a subset of ISP functions
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C and C++. Some knowledge of OpenGL or OpenCL would also be beneficial.
Expected size:
350 hours
Difficulty rating:
Hard

Warmup tasks

Intro

  • Build libcamera with the raspberrypi, rkisp1, simple, or vimc pipeline enabled
  • Read the project description detailing the hardware dependencies
    • If you have a Raspberry Pi 3, RK3399 device, or i.MX, run cam/qcam on it, and see if you can run OpenCL on it
    • If you have a device that won’t run OpenCL, you can try to run OpenGL compute shaders instead.
    • If you don’t have any of the above devices, then run with vimc
  • If you’re thinking about going the vimc route, fix the vimc raw capture functionality in the vimc kernel driver. This might be a sizable piece of work on its own, so studying the vimc driver and planning out how you would do the fix may be sufficient. See this bug report for more information.
  • Write a standalone OpenCL or OpenGL application which takes an image and applies color gains (or some other ISP function of your choice); a CPU reference sketch is shown below.

These exploration tasks should give you enough idea on which platform and route you would want to take.
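
For the standalone color-gains task, it can help to first write the operation on the CPU so you know what output the GPU version should produce. The sketch below applies per-channel gains to an interleaved 8-bit RGB buffer; the layout and gain values are arbitrary choices for illustration, and a real raw pipeline would work on Bayer data at the sensor's bit depth.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

/* CPU reference for one simple ISP stage: multiply each colour channel by a
 * gain and clamp to the 8-bit range. A GPU version (OpenCL kernel or OpenGL
 * compute shader) performs the same arithmetic, one work item per pixel. */
void applyColorGains(std::vector<uint8_t> &rgb, float gainR, float gainG, float gainB)
{
	for (size_t i = 0; i + 2 < rgb.size(); i += 3) {
		rgb[i + 0] = static_cast<uint8_t>(std::min(255.0f, rgb[i + 0] * gainR));
		rgb[i + 1] = static_cast<uint8_t>(std::min(255.0f, rgb[i + 1] * gainG));
		rgb[i + 2] = static_cast<uint8_t>(std::min(255.0f, rgb[i + 2] * gainB));
	}
}

Porting this to an OpenCL kernel or a GLSL compute shader is then mostly a matter of mapping the loop body onto one work item per pixel.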

Learning how an ISP is used

Designing the proposal

We want a generic ISP interface that can be used by different implementations of the software ISP. What kind of interface would you design? This is the first step of the design of your project. Once this is solidified, think about how you would implement a GPU-based software ISP.
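
One possible shape for such an interface, purely as a discussion starter (the class, struct and method names below are made up and not an existing libcamera API):

#include <cstddef>
#include <cstdint>

/* Hypothetical sketch only: each backend (CPU reference, OpenCL, OpenGL
 * compute) would implement the same operations, so the calling code does not
 * care which one is in use. */
struct SoftIspParams {
	float gainR = 1.0f;
	float gainG = 1.0f;
	float gainB = 1.0f;
	/* ... black level, colour correction matrix, gamma, ... */
};

class SoftwareIsp
{
public:
	virtual ~SoftwareIsp() = default;

	/* Prepare resources for a given raw input format and output size. */
	virtual int configure(unsigned int width, unsigned int height,
			      uint32_t rawFourcc) = 0;

	/* Process one raw frame into an output buffer with the given parameters. */
	virtual int process(const uint8_t *raw, size_t rawSize,
			    uint8_t *out, size_t outSize,
			    const SoftIspParams &params) = 0;
};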

Google Summer of Code 2021 Project Ideas

Multistream support in GStreamer element

Description of the project:

The GStreamer libcamera element allows libcamera to be used in GStreamer pipelines. libcamera supports capturing multiple streams simultaneously, for example a lower-quality stream for preview while another, higher-quality stream is used for recording. We would like to extend the GStreamer libcamera element to support this multistreaming.

This project requires a Raspberry Pi or Rockchip RK3399, such as a ROCK Pi 4, until vimc supports multiple streams.
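
On the libcamera side, requesting both streams from a single camera is already expressed through stream roles. Roughly, and assuming the current public API as used by applications such as cam:

#include <memory>

#include <libcamera/libcamera.h>

/* Sketch, assuming the current libcamera public API: configure one camera
 * with a viewfinder stream plus a video recording stream. */
int configureTwoStreams(std::shared_ptr<libcamera::Camera> camera)
{
	using namespace libcamera;

	std::unique_ptr<CameraConfiguration> config =
		camera->generateConfiguration({ StreamRole::Viewfinder,
						StreamRole::VideoRecording });
	if (!config || config->validate() == CameraConfiguration::Invalid)
		return -1;

	return camera->configure(config.get());
}

The GStreamer-side work would then revolve around exposing one source pad per configured stream and pushing each stream's completed buffers to the right pad.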

Expected results:
A working GStreamer libcamera element that supports streaming multiple streams simultaneously
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C and C++.

vimc multistream support

Description of the project:
vimc is a driver that emulates complex video hardware and is useful for testing libcamera without needing access to a physical camera. We would like to add support to the libcamera vimc pipeline handler for multiple simultaneous streams, to ease testing of this mechanism. This also requires adding multistream support to the vimc driver in the Linux kernel.
Expected results:
A working vimc driver and pipeline handler that supports streaming multiple streams simultaneously
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C and C++. Some knowledge of V4L2 would also be beneficial.

OpenGL/OpenCL software ISP

Description of the project:

Image Signal Processors (ISPs) implement the functions necessary in an image pipeline to transform a raw image from a sensor into a meaningful image that can be displayed. Examples of such functions are demosaicing (debayering), noise reduction, etc. Most ISPs are currently implemented in hardware, so a software-based ISP would be useful for testing and experimentation in the absence of a hardware ISP, and would provide an open source ISP implementation.

This project requires interfacing with a GPU for computation. There are a few ways to do this, such as with OpenCL or OpenGL compute shaders. Thus choosing an API is a required task for this project (it’s best to explore/experiment and choose before the project begins). In addition to being able to use the GPU for computation, the platform that this project will be developed on requires the ability to capture raw Bayer images. This can be done either via vimc (which needs to be fixed first), or with a CSI camera on one of the platforms that libcamera currently supports.

The easiest option is to use the vimc virtual driver on a regular computer. The raw Bayer capture implementation in the vimc driver is however broken at the moment, and isn’t supported yet in libcamera. These issues would need to be addressed first.

The options for OpenCL may be more limited. These options include a Raspberry Pi 3B+, which has an unofficial OpenCL implementation; a Rockchip RK3399 device, which might require closed-source Mali drivers; an i.MX device that has a raw Bayer sensor and OpenCL support; or feeding images manually to a software ISP implementation.

There is also the option of using OpenGL compute shaders instead of OpenCL, which may be supported on a wide range of platforms.

Expected results:
A software ISP that implements a subset of ISP functions
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C and C++. Some knowledge of OpenGL or OpenCL would also be beneficial.

Warmup tasks for Google Summer of Code 2021

These are some ideas for tasks for warming up for Google Summer of Code 2021. Look at the section(s) relevant to the project ideas that you’re interested in.

These warmup tasks are designed to guide your exploration into libcamera. You don't have to do all of them; they are just guidance on what you might want to look at to help shape your proposal and your project.

Remember that for all of these steps, you are not alone! You can talk to us anytime on IRC or email (although keep in mind that there may be timezone differences).

Communications and logistics

Multistream support in GStreamer element

Intro

  • Build the GStreamer element
  • Use the libcamera GStreamer element in some GStreamer pipeline. Play with it! Have fun!
  • If you see any problems, try to debug and fix them. Don’t worry about completing the fix, but log what you wanted to do, what you tried, and what the results were.

Learning the interface of libcamera and GStreamer

  • Go learn about how GStreamer elements work, read their tutorials, maybe make a mini GStreamer element.
  • Learn how the libcamera public API works by making your own mini libcamera app (just streaming frames from a webcam); a rough skeleton is sketched after this list.
    • Implement setting controls (doesn’t need to have a fancy UI)
  • Go study the libcamera GStreamer element (if you haven’t already in the intro)
  • Write a unit test (in test/) for the existing GStreamer element (with a single stream) and submit it to libcamera in a patch.
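
For the mini libcamera app, the skeleton below shows the usual sequence (camera manager, configuration, buffer allocation, requests, completion handler), loosely following libcamera's simple-cam example. It is a sketch with minimal error handling; verify the calls against the current API documentation, since details occasionally change.

#include <chrono>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

#include <libcamera/libcamera.h>

using namespace libcamera;

static std::shared_ptr<Camera> camera;

static void requestComplete(Request *request)
{
	if (request->status() == Request::RequestCancelled)
		return;

	std::cout << "Request completed" << std::endl;

	/* Recycle the request so the stream keeps running. This callback runs
	 * in libcamera's internal thread, so a real application would defer
	 * any heavier processing to its own event loop. */
	request->reuse(Request::ReuseBuffers);
	camera->queueRequest(request);
}

int main()
{
	CameraManager cm;
	cm.start();
	if (cm.cameras().empty())
		return 1;

	camera = cm.cameras()[0];
	camera->acquire();

	std::unique_ptr<CameraConfiguration> config =
		camera->generateConfiguration({ StreamRole::Viewfinder });
	config->validate();
	camera->configure(config.get());

	Stream *stream = config->at(0).stream();
	FrameBufferAllocator allocator(camera);
	allocator.allocate(stream);

	std::vector<std::unique_ptr<Request>> requests;
	for (const std::unique_ptr<FrameBuffer> &buffer : allocator.buffers(stream)) {
		std::unique_ptr<Request> request = camera->createRequest();
		request->addBuffer(stream, buffer.get());
		requests.push_back(std::move(request));
	}

	camera->requestCompleted.connect(requestComplete);

	camera->start();
	for (std::unique_ptr<Request> &request : requests)
		camera->queueRequest(request.get());

	/* Let the camera run for a few seconds before tearing down. */
	std::this_thread::sleep_for(std::chrono::seconds(3));

	camera->stop();
	allocator.free(stream);
	camera->release();
	camera.reset();
	cm.stop();

	return 0;
}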

Designing the proposal

Now that you know how GStreamer and libcamera work, how would you add support for multistream in the libcamera GStreamer element? Note that this is the proposal that you will submit during the student application phase.

vimc multistream support

Intro

  • Build libcamera with the vimc pipeline enabled
  • Stream frames from vimc with the qcam application and confirm that it works
  • There was previous work on this: https://patchwork.libcamera.org/project/libcamera/list/?series=1127
    • Apply these patches and see what happens. Try to figure out what’s not working and why (you will probably need to modify the patches, as the codebase has changed).
  • This was the previous work on the vimc kernel driver: https://lore.kernel.org/linux-media/20200819180442.11630-1-kgupta@es.iitr.ac.in/
    • Apply these patches to the kernel, and run with the pipeline again. See what happens. Try to figure out what’s not working and why (you will probably need to modify the patches, as the codebase has changed).
    • We highly recommend that you run this in a VM, as there is a risk of crashing your kernel during development.
    • Also study the patches of course. What do they do?

Designing the proposal

Now that you know the previous work, and the goal, how would you add support for multistream in the vimc pipeline handler in libcamera?

OpenGL/OpenCL software ISP

Intro

  • Build libcamera with the raspberrypi, rkisp1, simple, or vimc pipeline enabled
  • Read the project description detailing the hardware dependencies
    • If you have a Raspberry Pi 3, RK3399 device, or i.MX, run cam/qcam on it, and see if you can run OpenCL on it
    • If you have a device that won’t run OpenCL, you can try to run OpenGL compute shaders instead.
    • If you don’t have any of the above devices, then run with vimc
  • If you’re thinking about going the vimc route, fix the vimc raw capture functionality in the vimc kernel driver. This might be a sizable piece of work on its own, so studying the vimc driver and planning out how you would do the fix may be sufficient. See this bug report for more information.
  • Write a standalone OpenCL or OpenGL application which takes an image and applies color gains (or some other ISP function of your choice)

These exploration tasks should give you enough idea on which platform and route you would want to take.

Learning how an ISP is used

Designing the proposal

Now that you know how an ISP is meant to be used, how would you implement a GPU-based software ISP?

Other training tasks

  • The vimc scaler is currently hardcoded in the kernel driver to a fixed scaling factor of 3. Turn this into a variable-ratio scaler in the driver, and adapt the libcamera vimc pipeline handler accordingly.
  • Implement, in the vimc driver’s sensor entity, the V4L2 controls and selection rectangles that libcamera needs.
  • Another medium-sized task is to support the UVC XU API in the UVC pipeline handler. It requires a Logitech webcam, as these are the only ones for which we have XU documentation. The goal would be to expose libcamera controls for XU controls, without going through creating mappings between XU controls and V4L2 controls in the kernel (a rough sketch of the kernel’s XU query ioctl follows this list). Here are some resources:
  • Another related task is parsing UVC metadata to generate better timestamps. There’s an implementation of this in the kernel driver, but it’s broken, and it would be much better handled in userspace.
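
For the XU task, the kernel interface involved is the uvcvideo UVCIOC_CTRL_QUERY ioctl, which lets userspace read and write extension-unit controls directly. The sketch below issues a GET_CUR query; the unit and selector numbers are placeholders, and the real values come from the vendor's XU documentation for your camera.

#include <cstdint>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <linux/usb/video.h>   /* UVC_GET_CUR, UVC_SET_CUR, ... */
#include <linux/uvcvideo.h>    /* struct uvc_xu_control_query, UVCIOC_CTRL_QUERY */

/* Sketch: read the current value of one extension-unit control straight from
 * the uvcvideo driver. The unit (extension unit ID) and selector below are
 * placeholders; real values come from the camera's XU documentation. */
int queryXuControl(const char *device)
{
	int fd = open(device, O_RDWR);
	if (fd < 0)
		return -1;

	uint8_t data[2] = {};
	struct uvc_xu_control_query query = {};
	query.unit = 4;          /* placeholder extension unit ID */
	query.selector = 1;      /* placeholder control selector */
	query.query = UVC_GET_CUR;
	query.size = sizeof(data);
	query.data = data;

	int ret = ioctl(fd, UVCIOC_CTRL_QUERY, &query);
	close(fd);
	return ret;
}

The project would wrap queries like this behind libcamera controls in the UVC pipeline handler, rather than exposing raw ioctls to applications.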