Open Projects

Preparing for Google Summer of Code 2022

Communications and logistics

Google Summer of Code 2022 Project Ideas

For information about the application process, see the Google Summer of Code Application Guide.

For project ideas from past Google Summers of Code, see Past Google Summer of Code Project Ideas.

Just to be clear, all projects require a Linux device to run libcamera. If no hardware requirement is listed, then the device should have at least a USB webcam; vimc is acceptable as well unless specified otherwise. Note that the kernel may have to be compiled manually with the vimc module enabled, as many Linux distributions don't enable it by default. Of course, any other platform and camera supported by libcamera may be used instead. Getting the environment up and running is part of the warmup tasks for all projects.

For the warmup tasks that involve building or experimenting with a test application, simple-cam is a good starter example.

Improve GStreamer element to add support for properties

Description of the project:
The GStreamer element allows libcamera to be used as a video source in GStreamer. This project aims to add support for properties in the GStreamer element, so that more aspects of the stream can be controlled, such as framerate. There was already a patch series that started the effort of setting the framerate; this can be used as a starting point.
Expected results:
The ability to set properties in the GStreamer element, such as framerate.
Confirmed mentor:
Paul Elder, Vedant Paranjape
Desirable skills:
Good knowledge of C++. Some knowledge of GStreamer will be beneficial.
Expected size:
175 hours
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera with the GStreamer element enabled
  • Stream using GStreamer from the libcamera element
  • Explore how controls work in libcamera. Building a test application that uses libcamera (or extend cam) that can set controls might help.
  • Explore GStreamer properties
  • How would you connect GStreamer properties to libcamera controls? This will form the design of your project.
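One possible shape for this design, sketched in plain C++, is a static table mapping GStreamer property names to the libcamera controls they drive. The control names, IDs, and the framerate conversion below are illustrative assumptions, not libcamera's or GStreamer's actual API:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>
#include <utility>

// Hypothetical control identifiers standing in for libcamera's controls;
// the real IDs live in <libcamera/controls.h>.
enum class ControlId { FrameDuration, Brightness };

// One possible design: a static table mapping GStreamer property names
// to the libcamera control each one drives.
static const std::map<std::string, ControlId> propertyTable = {
	{ "framerate", ControlId::FrameDuration },
	{ "brightness", ControlId::Brightness },
};

// Translate a property assignment into a (control, value) pair.
// A framerate in frames/s becomes a frame duration in microseconds.
std::optional<std::pair<ControlId, double>>
translateProperty(const std::string &name, double value)
{
	auto it = propertyTable.find(name);
	if (it == propertyTable.end())
		return std::nullopt; /* unknown property: reject at set-time */

	double controlValue = value;
	if (it->second == ControlId::FrameDuration)
		controlValue = 1e6 / value; /* 30 fps -> ~33333 us */

	return std::make_pair(it->second, controlValue);
}
```

With a design along these lines, setting a property on the element would translate into queuing the corresponding control with subsequent requests.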

Adding support for controls in the simple pipeline handler

Description of the project:

The simple pipeline handler in libcamera is for devices that have no ISP. Currently it does not support controls. The goal of this project is to add controls to the simple pipeline handler, for both setting and reporting.

This project has a hardware requirement. It requires a device supported by the simple pipeline handler. The PinePhone, Librem 5, DragonBoard 410c, and 820c are some examples (the DragonBoards will need a camera expansion board). In addition, the device must already boot mainline Linux with at least version 5.4 (otherwise all of the Google Summer of Code time will be spent getting it to boot).
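As a rough illustration of what the setting path could look like, the sketch below clamps a requested control value into the device's reported range and translates it to a V4L2 control ID for the sensor subdevice. The control names and ranges here are hypothetical; only the two CID constants are the real V4L2 values (V4L2_CID_BRIGHTNESS and V4L2_CID_CONTRAST):

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <utility>

// Hypothetical stand-ins: in a pipeline handler, libcamera control IDs
// would be translated to the V4L2 CIDs exposed by the sensor subdevice.
enum class ControlId { Brightness, Contrast };

struct V4L2ControlInfo {
	int cid;      /* V4L2 control ID on the subdevice */
	int min, max; /* range reported by VIDIOC_QUERYCTRL */
};

static const std::map<ControlId, V4L2ControlInfo> controlMap = {
	{ ControlId::Brightness, { 0x00980900, 0, 255 } }, /* V4L2_CID_BRIGHTNESS */
	{ ControlId::Contrast,   { 0x00980901, 0, 127 } }, /* V4L2_CID_CONTRAST */
};

// Setting path: clamp the requested value into the device range and
// return the (cid, value) pair that would be handed to VIDIOC_S_CTRL.
std::pair<int, int> toV4L2(ControlId id, int value)
{
	const V4L2ControlInfo &info = controlMap.at(id);
	return { info.cid, std::clamp(value, info.min, info.max) };
}
```

The reporting path would run the same table in reverse, turning the ranges queried from the subdevice into control info exposed to applications.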

Expected results:
Ability to set and report controls from the simple pipeline handler.
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C++.
Expected size:
175 hours
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera with the simple pipeline handler enabled
  • Run cam/qcam on a device that the simple pipeline handler supports (see the hardware requirements in the project idea description)
  • Explore how controls work in libcamera. Building a test application that uses libcamera that can set controls might help.
  • What kinds of controls would you add to the simple pipeline handler? How would you plumb them in? This will form the design of your project.

Adding support for controls in the libcamera test applications

Description of the project:

The libcamera test applications, cam and qcam, currently do not support setting and changing controls. The goal of this project is to add support for setting and changing controls to either of them.

For cam, this also involves figuring out how to specify when the controls should be changed and to what values. For qcam, this also involves designing the GUI, including how to display all the options.
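For cam, one conceivable command-line syntax (purely an assumption for illustration, not an existing cam option) is comma-separated Name=value assignments, which a small parser can turn into a control map to apply before capture:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Hypothetical cam-style interface: accept comma-separated "Name=value"
// assignments, e.g.
//   cam --capture --control Brightness=0.5,Contrast=1.2
// and parse them into a name -> value map to apply before capture.
std::map<std::string, double> parseControls(const std::string &arg)
{
	std::map<std::string, double> controls;
	std::istringstream stream(arg);
	std::string item;

	while (std::getline(stream, item, ',')) {
		std::size_t eq = item.find('=');
		if (eq == std::string::npos)
			continue; /* malformed entry: skip (or report an error) */
		controls[item.substr(0, eq)] = std::stod(item.substr(eq + 1));
	}
	return controls;
}
```

Mapping the parsed names onto libcamera's actual control IDs, and deciding at which frame each value takes effect, is the design work of the project.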

Expected results:
Ability to set and change controls in cam and/or qcam.
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C++.
Expected size:
175 hours
Difficulty rating:
Easy

Warmup tasks

  • Build libcamera with cam/qcam enabled
  • Run cam/qcam
  • Explore how controls work in libcamera. Building a test application that uses libcamera that can set controls might help.
  • How do you envision the interface in cam/qcam for setting controls? How would you plumb it in? This will form the design of your project.

OpenGL/OpenCL software ISP

Description of the project:

Image Signal Processors (ISPs) implement the functions necessary in an image pipeline to transform a raw image from a sensor into a meaningful image that can be displayed. Examples of such functions are demosaicing (debayering), noise reduction, etc. Most ISPs are currently implemented in hardware, so a software-based ISP would be useful for testing and experimentation in the absence of a hardware ISP, and would also be an open source ISP implementation.

This project requires interfacing with a GPU for computation. There are a few ways to do this, such as with OpenCL or OpenGL compute shaders. Thus choosing an API is a required task for this project (it’s best to explore/experiment and choose before the project begins). In addition to being able to use the GPU for computation, the platform that this project will be developed on requires the ability to capture raw Bayer images. This can be done either via vimc (which needs to be fixed first), or with a CSI camera on one of the platforms that libcamera currently supports.

The easiest option is to use the vimc virtual driver on a regular computer. However, the raw Bayer capture implementation in the vimc driver is currently broken, and is not yet supported in libcamera. These issues would need to be addressed first.

The options for OpenCL may be more limited. They include a Raspberry Pi 3B+, which has an unofficial OpenCL implementation; a Rockchip RK3399 device, which might require the closed-source Mali drivers; an i.MX device that has a raw Bayer sensor and OpenCL support; or feeding images manually to a software ISP implementation.

There is also the option of using OpenGL compute shaders instead of OpenCL, which may be supported on a wide range of platforms.
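Whichever API is chosen, the per-pixel arithmetic is the same. As a CPU reference for one such ISP function, the sketch below applies white-balance gains to an RGGB Bayer image; a GPU port would run the same computation in an OpenCL kernel or an OpenGL compute shader, one work item per pixel:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// CPU reference for one ISP function: per-channel white-balance gains
// applied in place to a 16-bit RGGB Bayer image.
void applyGains(std::vector<uint16_t> &bayer, unsigned width, unsigned height,
		float gainR, float gainG, float gainB)
{
	for (unsigned y = 0; y < height; y++) {
		for (unsigned x = 0; x < width; x++) {
			/* RGGB layout: even row = R G, odd row = G B */
			float gain = (y % 2 == 0) ? (x % 2 == 0 ? gainR : gainG)
						  : (x % 2 == 0 ? gainG : gainB);
			float v = bayer[y * width + x] * gain;
			bayer[y * width + x] = v > 65535.0f ? 65535 : uint16_t(v);
		}
	}
}
```

Functions like this, chained together (gains, demosaicing, noise reduction, ...), form the pipeline the project would build.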

Expected results:
A software ISP that implements some number of ISP functions
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C and C++. Some knowledge of OpenGL or OpenCL would also be beneficial.
Expected size:
350 hours
Difficulty rating:
Hard

Warmup tasks

Intro

  • Build libcamera with the raspberrypi, rkisp1, simple, or vimc pipeline enabled
  • Read the project description detailing the hardware dependencies
    • If you have a Raspberry Pi 3, RK3399 device, or i.MX, run cam/qcam on it, and see if you can run OpenCL on it
    • If you have a device that won’t run OpenCL, you can try to run OpenGL compute shaders instead.
    • If you don’t have any of the above devices, then run with vimc
  • If you’re thinking about going the vimc route, fix the vimc raw capture functionality in the vimc kernel driver. This might be a sizable piece of work on its own, so studying the vimc driver and planning out how you would do the fix may be sufficient. See this bug report for more information.
  • Write a standalone OpenCL or OpenGL application which takes an image and applies color gains (or some other ISP function of your choice)

These exploration tasks should give you enough idea on which platform and route you would want to take.

Learning how an ISP is used

Designing the proposal

We want a generic ISP interface that can be used for different implementations of the software ISP. What kind of interface would you design? This is the first step of the design of your project. After this is solidified, think about how you would implement a GPU-based software ISP.
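One hypothetical shape for such an interface (the names and signatures below are illustrative assumptions, not an existing libcamera API): every backend, whether a CPU reference, OpenCL, or OpenGL compute, implements the same configure/process pair, so calling code never depends on where the processing happens.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct FrameInfo {
	unsigned width;
	unsigned height;
};

// Hypothetical generic software-ISP interface.
class SoftwareIsp
{
public:
	virtual ~SoftwareIsp() = default;
	virtual int configure(const FrameInfo &info) = 0;
	/* Consume a raw Bayer frame, produce a processed frame. */
	virtual void process(const std::vector<uint16_t> &raw,
			     std::vector<uint16_t> &out) = 0;
};

// Trivial CPU backend used here only to exercise the interface:
// pass-through processing.
class CpuIsp : public SoftwareIsp
{
public:
	int configure(const FrameInfo &info) override
	{
		size_ = info.width * info.height;
		return 0;
	}
	void process(const std::vector<uint16_t> &raw,
		     std::vector<uint16_t> &out) override
	{
		out.assign(raw.begin(), raw.begin() + size_);
	}

private:
	unsigned size_ = 0;
};
```

A GPU backend would implement the same two methods, uploading buffers and dispatching kernels inside process().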

Integrating libcamera into applications

Description of the project:

libcamera is a library for applications to use for access and control of cameras. This project aims to add support for libcamera to other applications that need to use cameras in Linux, as they will benefit from using libcamera rather than V4L2 as cameras get more complex.

Note that this project will involve contributing code to other projects.

Some applications that we would like to add support for libcamera to:

  • OpenCV
  • OBS

OpenCV is an open source computer vision library. It has facilities to interface with various video sources, including cameras.

OBS is software for live streaming and screen recording. It has support for various media sources to organize into a stream, including cameras. There is already a minimal implementation of libcamera support in OBS that may be used as a starting point.

Expected results:
The application can use libcamera cameras as a media input
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C++. Some knowledge of V4L2 and of the application to which libcamera will be added would also be beneficial.
Expected size:
175/350 hours (depending on the number of features to be integrated)
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera, as well as the application of your choice
  • Study the libcamera interface. Building a test application, or reading both simple-cam and the libcamera documentation can help.
  • Study how the application of your choice interfaces with its video devices. If documentation is available, study that. If not, study how the application interfaces with other video devices, such as V4L2, which is what is usually currently used for Linux systems.
  • Design how you would connect the application’s video interface with libcamera. This will be the roadmap of the implementation.

vimc multistream support

Description of the project:
vimc is a driver that emulates complex video hardware, and is useful for testing libcamera without needing access to a physical camera. We would like to add support to the libcamera vimc pipeline handler for multiple simultaneous streams, to ease testing of such mechanisms. This also requires adding multistream support to the vimc driver in the Linux kernel.
Expected results:
A working vimc driver and pipeline handler that supports streaming multiple streams simultaneously
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C and C++. Some knowledge of V4L2 would also be beneficial.
Expected size:
350 hours
Difficulty rating:
Medium

Warmup tasks

Intro

  • Build libcamera with the vimc pipeline enabled
  • Stream frames from vimc with the qcam application and confirm that it works
  • There was previous work on this: https://patchwork.libcamera.org/project/libcamera/list/?series=1127
    • Apply these patches and see what happens. Try to figure out what's not working and why (the patches will probably need to be updated, as the codebase has changed).
  • This was the previous work on the vimc kernel driver: https://lore.kernel.org/linux-media/20200819180442.11630-1-kgupta@es.iitr.ac.in/
    • Apply these patches to the kernel, and run with the pipeline again. See what happens. Try to figure out what's not working and why (the patches will probably need to be updated, as the codebase has changed).
    • We highly recommend that you run this in a VM, as there is a risk of crashing your kernel during development.
    • Also study the patches of course. What do they do?

Designing the proposal

Now that you know the previous work, and the goal, how would you add support for multistream in the vimc pipeline handler in libcamera?

Improving IPC

Description of the project:

libcamera currently allows IPAs to be sandboxed in a separate process. To support this, there is an IPC mechanism to allow the pipeline handler and IPA to communicate. It is designed in such a way that neither the pipeline handler nor the IPA know if they are running sandboxed or not. In addition, each pipeline handler and IPA can use their own set of C++ functions instead of one common API for all pipeline handlers.

This is implemented with code generation that takes an interface definition file and generates serializers, deserializers, and proxies (see the IPA guide for more information). The process involves taking many subarrays, which the current implementation achieves by copying vectors. In addition, the serialized format is custom.

The goal of this project is to reduce the amount of vector copies by replacing them with Spans, and to support the mojo serialized format.

Note that both of these goals need not be achieved; one or the other is sufficient.
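To illustrate the Span half of the goal, the sketch below contrasts taking a subarray by copying into a new vector with taking a non-owning view. The minimal Span type here is a stand-in for libcamera's Span class (itself modeled on C++20 std::span):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal read-only view type standing in for libcamera's Span:
// a pointer and a length, with no ownership of the data.
template<typename T>
struct Span {
	const T *data;
	std::size_t size;
};

// Taking a subarray today: copies the elements into a new vector.
std::vector<uint8_t> subVector(const std::vector<uint8_t> &buf,
			       std::size_t offset, std::size_t len)
{
	return { buf.begin() + offset, buf.begin() + offset + len };
}

// Taking a subarray with a Span: just pointer arithmetic on the existing
// buffer, valid only as long as the underlying vector is alive.
Span<uint8_t> subSpan(const std::vector<uint8_t> &buf,
		      std::size_t offset, std::size_t len)
{
	return { buf.data() + offset, len };
}
```

The view costs nothing but pointer arithmetic, but it is only valid while the underlying buffer lives, which is why lifetime analysis is part of plumbing Spans through the serializers.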

Expected results:
More efficient IPC, and/or data serialized in the mojo format
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C++, Python 3, and jinja2.
Expected size:
350 hours
Difficulty rating:
Hard

Warmup tasks

  • Build libcamera on a platform that has an IPA (vimc will suffice)
  • Run cam with IPA isolation enabled (set the environment variable LIBCAMERA_IPA_FORCE_ISOLATION to 1) and confirm that it works
    • If it doesn’t work, we’ll work together on fixing it.
    • If you are using vimc, then we’ll have to extend the IPA interface so that it’s more appropriate for testing.
  • Study the code generation code (utils/ipc/generators/libcamera_templates/* and utils/ipc/generators/mojom_libcamera_generator.py). It might be difficult to grasp at first. It might be useful to compare the generated code (in $BUILDDIR/include/libcamera/ipa/) with the templates. Reading the documentation (Documentation/guides/ipa.rst) would help as well.
    • Studying the generated IPADataSerializers to learn how IPADataSerializer works would be useful as well.
  • Practice using Spans. You can study the Span unit test for this, or write experimentation code.
  • How would you plumb in Spans and use them to replace vectors? Or, how would you restructure the serialization to be in the mojo archive format instead of the current custom format? (Depends on the chosen goal.) This will form the design of your project.

Adding UVC hardware timestamp support

Description of the project:

libcamera reports information in each frame that is output from the camera; one of these pieces of information is the timestamp. At the moment, this timestamp is the one that is sampled by the Linux kernel when it completes the buffer. However, for cameras compatible with the USB video class (which covers virtually all USB webcams on the market today), we can get more accurate timestamps by using information from the UVC packet headers, which contain hardware timestamps and other clock information. This information is already reported via the V4L2 metadata API.

The goal would be to capture this information in the UVC pipeline handler and calculate more accurate timestamps. There is a reference implementation and related information in the kernel, but due to kernel limitations it cannot use floating point calculations, so it is hard to improve on.
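The core of the conversion is a clock-domain mapping: pair each device clock sample with the host time at which it was observed, fit a line through the samples, and evaluate it at the packet's hardware timestamp. The sketch below uses a simple least-squares fit to show the idea; the actual sampling and filtering strategy is part of the design work:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Map a device clock timestamp to host time by fitting a least-squares
// line through (device clock, host clock) sample pairs. The kernel's
// reference implementation performs a similar fit in integer arithmetic;
// in userspace we are free to use floating point.
double deviceToHost(const std::vector<std::pair<double, double>> &samples,
		    double deviceTs)
{
	double n = samples.size();
	double sx = 0, sy = 0, sxx = 0, sxy = 0;

	for (const auto &[dev, host] : samples) {
		sx += dev;
		sy += host;
		sxx += dev * dev;
		sxy += dev * host;
	}

	double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
	double intercept = (sy - slope * sx) / n;
	return slope * deviceTs + intercept;
}
```

The pipeline handler would collect the sample pairs from the V4L2 metadata stream and apply the mapping to each frame's hardware timestamp.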

A UVC webcam is required for this project.

Expected results:
More accurate timestamps for completed frames
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C/C++. Some knowledge of V4L2 would also be beneficial.
Expected size:
175 hours
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera on a platform with a USB webcam
  • Run cam
  • Study how you would get the information from the UVC headers via the V4L2 metadata API.
  • Study the conversion function mentioned in the project description
  • How would you plumb the UVC header information through from V4L2 to the UVC pipeline handler, and how would you implement the timestamp conversion functions?

OMAP3 ISP pipeline handler

Description of the project:

Add support for OMAP3 devices to libcamera.

This project has a hardware requirement. In addition, the device must already boot mainline Linux with at least version 5.4 (otherwise all of the Google Summer of Code time will be spent getting it to boot). Some examples of OMAP3 devices are:

  • BeagleBoard-xM (with a suitable camera module)
  • Motorola Droid/Milestone
  • Motorola Droid 2
  • Motorola Droid X
  • Nokia N9
  • Nokia N900
  • Nokia N950
Expected results:
libcamera is usable on the OMAP3 device
Confirmed mentor:
Paul Elder, Laurent Pinchart, Vedant Paranjape
Desirable skills:
Good knowledge of C++.
Expected size:
350 hours
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera for the OMAP3 device of your choice (i.e. prepare the development and testing environment).
  • Study the pipeline handler writer's guide (Documentation/guides/pipeline-handler.rst). Actually following its steps and creating the pipeline handler for vivid might be useful in understanding the process. Studying how other complex pipeline handlers (ipu3, rkisp1, raspberrypi) work might be useful as well.
  • Study how the OMAP3 ISP works. Setting up the pipeline with media-ctl and capturing with yavta might help.
  • How would you design the OMAP3 pipeline handler? This will serve as the roadmap for your implementation.

Other warmup tasks

  • The vimc scaler is currently hardcoded in the kernel driver to multiples of 3. Turn this into a variable-ratio scaler in the driver, and adapt the libcamera vimc pipeline handler accordingly.
  • Implement V4L2 controls and selection rectangles in the vimc driver that libcamera wants in the vimc sensor entity.
  • Another medium-sized task is to support the UVC XU API in the UVC pipeline handler. It requires a Logitech webcam, as these are the only ones for which we have XU documentation. The goal would be to expose libcamera controls for XU controls, without creating mappings between XU controls and V4L2 controls in the kernel.
  • Another related task is parsing UVC metadata to generate better timestamps. There is an implementation of this in the kernel driver, but it is broken, and it would be much better handled in userspace.