Open Projects

Preparing for Google Summer of Code 2023

Communications and logistics

Google Summer of Code 2023 Project Ideas

For information about the application process, see the Google Summer of Code Application Guide.

For project ideas from past Google Summers of Code, see Past Google Summer of Code Project Ideas.

Just to be clear, all projects require a Linux device to run libcamera. If no hardware requirement is listed, then the device should have at least a USB webcam, though vimc is acceptable as well unless specified otherwise. Note that the kernel may have to be compiled manually with the vimc module enabled, as many Linux distributions don’t have it enabled by default. Of course, any other platform and camera that is supported by libcamera may be used instead. Getting the environment up and running is part of the warmup tasks for all projects.

For the warmup tasks that involve building or experimenting with a test application, simple-cam is a good starter example.

Note that the lists of desirable skills for the projects are not very long; this is because we expect the student to be able to learn the necessary skills while doing the project. We do not want to make the student wander and have to reinvent every wheel, but we also don’t want to spoon-feed everything. To this end, the ability to read compiler error messages is a must-have, as is the ability to think critically and reason about how the software is designed and interconnected. The mentors will be there to support the student in understanding the necessary concepts and to guide the student towards solutions.

Adding support for controls in the simple pipeline handler

Description of the project:

The simple pipeline handler in libcamera is for devices that have no ISP. It currently does not support controls. The goal of this project is to add support for controls to the simple pipeline handler, both for setting controls and for reporting them.

This project has a hardware requirement: a device supported by the simple pipeline handler. The PinePhone, the Librem 5, and the DragonBoard 410c and 820c are some examples (the DragonBoards will need a camera expansion board). In addition, the device must already boot mainline Linux version 5.4 or later (otherwise all of the Google Summer of Code time will be spent getting it to boot).

Expected results:
Ability to set and report controls from the simple pipeline handler.
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C++.
Expected size:
175 hours
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera with the simple pipeline handler enabled
  • Run cam/qcam on a device that the simple pipeline handler supports (see the hardware requirements in the project idea description)
  • Explore how controls work in libcamera. Building a test application that uses libcamera to set controls might help; see the sketch after this list.
  • What kinds of controls would you add to the simple pipeline handler? How would you plumb them in? This will form the design of your project.
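As a rough illustration of the last two tasks, the sketch below lists the controls a camera advertises and notes where a control value would be attached to a request. It is only a starting point, not a complete capture loop (simple-cam covers that), and exact API details may vary between libcamera versions:

    /*
     * Minimal sketch: enumerate the controls a camera exposes. The capture
     * loop (configuration, buffer allocation, request queueing) is omitted;
     * see simple-cam for the full flow.
     */
    #include <iostream>
    #include <memory>

    #include <libcamera/libcamera.h>

    using namespace libcamera;

    int main()
    {
        auto cm = std::make_unique<CameraManager>();
        cm->start();

        if (cm->cameras().empty()) {
            std::cerr << "No cameras found" << std::endl;
            return 1;
        }

        std::shared_ptr<Camera> camera = cm->cameras()[0];
        camera->acquire();

        /* The ControlInfoMap describes which controls the camera supports. */
        for (const auto &[id, info] : camera->controls())
            std::cout << id->name() << ": " << info.toString() << std::endl;

        /*
         * Once the camera is configured and buffers are allocated, controls
         * are attached to individual requests before they are queued, e.g.:
         *
         *     request->controls().set(controls::Brightness, 0.5f);
         */

        camera->release();
        cm->stop();
        return 0;
    }

On a device driven by the simple pipeline handler the list printed above will currently be empty, which is exactly what this project is meant to change.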

Integrating libcamera into applications

Description of the project:

libcamera is a library that applications can use to access and control cameras. This project aims to add libcamera support to other applications that need to use cameras on Linux, as they will benefit from using libcamera rather than V4L2 as cameras get more complex.

Note that this project will involve contributing code to other projects.

Some applications to which we would like to add libcamera support:

  • OpenCV

OpenCV is an open source computer vision library. It has facilities to interface with various video sources, including cameras.

Expected results:
The application can use libcamera cameras as a media input
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C++. Some knowledge of V4L2 and of the application to which libcamera will be added would also be beneficial.
Expected size:
175/350 hours (depending on the number of features that will be integrated)
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera, as well as the application of your choice
  • Study the libcamera interface. Building a test application, or reading simple-cam alongside the libcamera documentation, can help.
  • Study how the application of your choice interfaces with its video devices. If documentation is available, study that. If not, study how the application interfaces with other video sources, such as V4L2, which is what is typically used on Linux systems today; the sketch after this list shows what that currently looks like for OpenCV.
  • Design how you would connect the application’s video interface with libcamera. This will be the roadmap of the implementation.
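For reference, this is roughly how an application reaches a camera today through OpenCV's existing V4L2 capture backend. The project is essentially about offering an equivalent path backed by libcamera, covering the same open/configure/grab/property operations:

    /*
     * Rough sketch of OpenCV's current capture path on Linux: open a V4L2
     * device, set a resolution, and read frames in a loop.
     */
    #include <opencv2/opencv.hpp>

    int main()
    {
        /* Explicitly select the V4L2 backend for device 0. */
        cv::VideoCapture cap(0, cv::CAP_V4L2);
        if (!cap.isOpened())
            return 1;

        cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
        cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

        cv::Mat frame;
        while (cap.read(frame)) {
            cv::imshow("preview", frame);
            if (cv::waitKey(1) == 27) /* ESC quits */
                break;
        }

        return 0;
    }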

vimc multistream support

Description of the project:
vimc is a driver that emulates complex video hardware, and is useful for testing libcamera without needing access to a physical camera. We would like to add support for multiple simultaneous streams to the libcamera vimc pipeline handler, to ease testing of this mechanism. This also requires adding multistream support to the vimc driver in the Linux kernel.
Expected results:
A working vimc driver and pipeline handler that support streaming multiple streams simultaneously
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C and C++. Some knowledge of V4L2 would also be beneficial.
Expected size:
350 hours
Difficulty rating:
Medium

Warmup tasks

Intro

  • Build libcamera with the vimc pipeline enabled
  • Stream frames from vimc with the qcam application and confirm that it works
  • There was previous work on this: https://patchwork.libcamera.org/project/libcamera/list/?series=1127
    • Apply these patches and see what happens. Try to figure out what’s not working and why (you will probably need to adapt the patches, as the codebase has changed).
  • This was the previous work on the vimc kernel driver: https://lore.kernel.org/linux-media/20200819180442.11630-1-kgupta@es.iitr.ac.in/
    • Apply these patches to the kernel, and run with the pipeline again. See what happens. Try to figure out what’s not working and why (you will probably need to adapt the patches, as the codebase has changed).
    • We highly recommend that you run this in a VM, as there is a risk of crashing your kernel during development.
    • Also study the patches of course. What do they do?

Designing the proposal

Now that you know the previous work and the goal, how would you add multistream support to the vimc pipeline handler in libcamera?
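To make the goal concrete, here is a sketch of the application-side view of multistream under the current libcamera API (details may differ between versions): ask for more than one stream role and configure one StreamConfiguration per stream. Today the vimc pipeline handler only exposes a single stream, so the second role would not be satisfied.

    /*
     * Sketch: request two stream roles and configure both streams.
     * Making this work on vimc is the goal of the project.
     */
    #include <memory>

    #include <libcamera/libcamera.h>

    using namespace libcamera;

    std::unique_ptr<CameraConfiguration>
    configureTwoStreams(std::shared_ptr<Camera> camera)
    {
        std::unique_ptr<CameraConfiguration> config =
            camera->generateConfiguration({ StreamRole::Viewfinder,
                                            StreamRole::VideoRecording });
        if (!config || config->size() < 2)
            return nullptr;

        /* Each stream gets its own StreamConfiguration to adjust. */
        config->at(0).size = { 640, 480 };
        config->at(1).size = { 1280, 720 };

        if (config->validate() == CameraConfiguration::Invalid)
            return nullptr;

        if (camera->configure(config.get()) < 0)
            return nullptr;

        return config;
    }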

Improving IPC

Description of the project:

libcamera currently allows IPAs to be sandboxed in a separate process. To support this, there is an IPC mechanism that allows the pipeline handler and the IPA to communicate. It is designed in such a way that neither the pipeline handler nor the IPA knows whether it is running sandboxed or not. In addition, each pipeline handler and IPA pair can use its own set of C++ functions instead of one common API shared by all pipeline handlers.

This is implemented with code generation that takes an interface definition file and generates serializers, deserializers, and proxies (see the IPA guide for more information). The process involves taking a lot of subarrays, and the current implementation copies vectors to achieve this. In addition, the serialized format is a custom one.

The goal of this project is to reduce the number of vector copies by replacing them with Spans, and to support the mojo serialized format.

Note that it is not necessary to achieve both of these; either one on its own is acceptable.

Expected results:
More efficient IPC, and/or data serialized in the mojo format
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C++, Python 3, and jinja2.
Expected size:
350 hours
Difficulty rating:
Hard

Warmup tasks

  • Build libcamera on a platform that has an IPA (vimc will suffice)
  • Run cam with IPA isolation enabled (set the environment variable LIBCAMERA_IPA_FORCE_ISOLATION to 1) and confirm that it works
    • If it doesn’t work, we’ll work together on fixing it.
    • If you are using vimc, then we’ll have to extend the IPA interface so that it’s more appropriate for testing.
  • Study the code generation code (utils/ipc/generators/libcamera_templates/* and utils/ipc/generators/mojom_libcamera_generator.py). It might be difficult to grasp at first. It might be useful to compare the generated code (in $BUILDDIR/include/libcamera/ipa/) with the templates. Reading the documentation (Documentation/guides/ipa.rst) would help as well.
    • Studying the generated IPADataSerializers to learn how IPADataSerializer works would be useful as well.
  • Practice using Spans. You can study the Span unit test for this, or write some experimentation code; see the sketch after this list.
  • How would you plumb in Spans and use them to replace vectors? Or, how would you restructure the serialization to be in the mojo archive format instead of the current custom format? (Depends on the chosen goal.) This will form the design of your project.
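For the Span warmup, the contrast the project cares about looks roughly like this (libcamera ships its own Span class in libcamera/base/span.h that mirrors std::span; check the header in your tree for the exact interface):

    /*
     * Taking a subarray by copying into a new vector (what the generated
     * serializers do today) versus viewing it through a Span (no allocation,
     * no copy, just a pointer and a length into the original data).
     */
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    #include <libcamera/base/span.h>

    std::vector<uint8_t> copySubarray(const std::vector<uint8_t> &data,
                                      std::size_t offset, std::size_t length)
    {
        /* Allocates a new vector and copies `length` bytes into it. */
        return std::vector<uint8_t>(data.begin() + offset,
                                    data.begin() + offset + length);
    }

    libcamera::Span<const uint8_t> viewSubarray(libcamera::Span<const uint8_t> data,
                                                std::size_t offset, std::size_t length)
    {
        /* No copy: the Span just references `length` bytes of `data`. */
        return data.subspan(offset, length);
    }

The caveat, and a large part of the design work, is lifetime: a Span is only valid for as long as the data it points into, so the serializers and deserializers have to be structured so that the backing storage outlives every view into it.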

Adding UVC hardware timestamp support

Description of the project:

libcamera reports information with each frame that is output from the camera; one piece of this information is the timestamp. At the moment, this timestamp is the one sampled by the Linux kernel when it completes the buffer. However, for cameras compatible with the USB Video Class (which covers virtually all USB webcams on the market today), we can get more accurate timestamps by using information from the UVC packet headers, which contain hardware timestamps and other clock information. This information is already reported via the V4L2 metadata API.

The goal would be to capture this information in the UVC pipeline handler and calculate more accurate timestamps. A reference implementation and related information exist in the kernel, but due to kernel limitations it cannot use floating-point calculations, so it is hard to improve upon.

A UVC webcam is required for this project.

Expected results:
More accurate timestamps for completed frames
Confirmed mentor:
Paul Elder
Desirable skills:
Good knowledge of C/C++. Some knowledge of V4L2 would also be beneficial.
Expected size:
175 hours
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera on a platform with a USB webcam
  • Run cam
  • Study how you would get the information from the UVC headers via the V4L2 metadata API; the sketch after this list outlines what those metadata buffers look like.
  • Study the conversion function mentioned in the project description
  • How would you plumb the UVC header information through from V4L2 to the UVC pipeline handler, and how would you implement the timestamp conversion functions?
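As a sketch of what the metadata path provides, each buffer dequeued from the UVC metadata video node (captured with V4L2_BUF_TYPE_META_CAPTURE in the V4L2_META_FMT_UVC format) carries records shaped roughly as below. This mirrors the layout described with the kernel's uvcvideo driver; double-check it against linux/uvcvideo.h in the kernel you run before relying on exact offsets.

    /* Approximate layout of one UVC metadata record (V4L2_META_FMT_UVC). */
    #include <cstdint>

    struct UvcMetaBuf {
        uint64_t ns;      /* host CLOCK_MONOTONIC timestamp, in nanoseconds */
        uint16_t sof;     /* host-side USB start-of-frame counter */
        uint8_t length;   /* number of valid bytes in buf[] below */
        uint8_t flags;    /* bmHeaderInfo flags from the UVC payload header */
        uint8_t buf[];    /* raw UVC payload header bytes */
    } __attribute__((packed));

    /*
     * bmHeaderInfo bits of interest (from the UVC specification): when set,
     * the raw header carries a 32-bit device-clock presentation timestamp
     * (PTS) and a 48-bit source clock reference (SCR). Correlating these
     * device-clock samples with the host ns/sof values is what allows a
     * more accurate frame timestamp to be computed.
     */
    constexpr uint8_t UVC_STREAM_PTS = 1 << 2;
    constexpr uint8_t UVC_STREAM_SCR = 1 << 3;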

OMAP3 ISP pipeline handler

Description of the project:

Add support for OMAP3 devices to libcamera.

An N900 running mainline Linux will be provided for development. Otherwise, if you already have an OMAP3 device running mainline Linux version 5.4 or later, you may use that. Some examples of OMAP3 devices are:

  • BeagleBoard-xM (with a suitable camera module)
  • Motorola Droid/Milestone
  • Motorola Droid 2
  • Motorola Droid X
  • Nokia N9
  • Nokia N900
  • Nokia N950
Expected results:
libcamera is usable on the OMAP3 device
Confirmed mentor:
Paul Elder, Laurent Pinchart
Desirable skills:
Good knowledge of C++.
Expected size:
350 hours
Difficulty rating:
Medium

Warmup tasks

  • Build libcamera for the OMAP3 device of your choice (i.e. prepare the development and testing environment).
  • Study the pipeline handler writer’s guide (Documentation/guides/pipeline-handler.rst). Actually following its steps and creating the pipeline handler for vivid might be useful in understanding the process. Studying how other complex pipeline handlers (ipu3, rkisp1, raspberrypi) work might be useful as well.
  • Study how the OMAP3 ISP works. Setting up the pipeline with media-ctl and capturing with yavta might help.
  • How would you design the OMAP3 pipeline handler? This will serve as the roadmap for your implementation.
  • For the purposes of practical hands-on learning with a pipeline handler, add a control to the UVC pipeline handler. You can see which controls are supported by the V4L2 device with v4l2-ctl -d /dev/video0 --list-ctrls, and in libcamera with cam -c 1 -I --list-controls. See src/libcamera/control_ids.yaml for the list of controls that libcamera has (or you could add your own, if there is one that the V4L2 device supports but libcamera doesn’t have yet). See the sketch after this list for the general shape of such a mapping.
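As a very rough idea of what that hands-on task involves, the sketch below shows the shape of mapping a libcamera control onto a V4L2 control inside a pipeline handler. The names used here (processControls(), the fixed 0-255 scaling) are illustrative assumptions; the real helpers, headers and control ranges should be taken from the existing code in src/libcamera/pipeline/uvcvideo/uvcvideo.cpp.

    /*
     * Illustrative only: translate a libcamera control from a request into
     * a V4L2 control on the capture video device. The existing UVC pipeline
     * handler already does this for a handful of controls; adding one more
     * mostly means extending this kind of mapping.
     */
    #include <linux/v4l2-controls.h>

    #include <libcamera/control_ids.h>
    #include <libcamera/controls.h>

    #include "libcamera/internal/v4l2_videodevice.h"

    using namespace libcamera;

    void processControls(V4L2VideoDevice *video, const ControlList &requestControls)
    {
        ControlList v4l2Controls(video->controls());

        const auto &brightness = requestControls.get(controls::Brightness);
        if (brightness) {
            /*
             * libcamera's Brightness is a float in [-1.0, 1.0]; a 0-255
             * V4L2 range is assumed here for illustration. Real code reads
             * the actual range from video->controls().
             */
            int32_t value = static_cast<int32_t>((*brightness + 1.0f) * 127.5f);
            v4l2Controls.set(V4L2_CID_BRIGHTNESS, ControlValue(value));
        }

        video->setControls(&v4l2Controls);
    }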

Other warmup tasks