Hello, I have a CanMV-K230-V1.1 board and I am currently running the k230_linux_sdk image on it. My goal is to bring up the OV7251 sensor, and eventually connect at least two of these sensors at the same time. I have started researching how to do this, but I have a lot of questions:
- I have followed the instructions for building the Linux SDK image from source, and it works with no changes. Looking at k230-canmv.dts (in https://github.com/ruyisdk/linux-xuantie-kernel), I see that I2C0 and I2C1 are disabled, as well as mipi1 and mipi2. What is the recommended way to enable these? Should I make a new DTS file and point BR2_LINUX_KERNEL_CUSTOM_DTS_PATH at it?
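  For reference, this is roughly what I was planning to put in the custom DTS: a copy of (or include of) the board DTS with the disabled nodes flipped on. The label names (`&i2c0`, `&mipi1`, etc.) are my guesses from reading the upstream file and may not match exactly, and I have left out any pinctrl that might also be needed:

  ```dts
  /* Fragment of a custom board DTS referenced by
   * BR2_LINUX_KERNEL_CUSTOM_DTS_PATH. Node labels are assumptions
   * based on my reading of k230-canmv.dts. */
  &i2c0 {
      status = "okay";
  };

  &i2c1 {
      status = "okay";
  };

  &mipi1 {
      status = "okay";
  };

  &mipi2 {
      status = "okay";
  };
  ```

  Is that the right approach, or is there some overlay mechanism the SDK prefers?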
- I started writing the OV7251 vvcam driver (based on the OV5647 driver, following these instructions), but I see that the open_i2c() function has /dev/i2c-0 hard-coded. How should the driver be written if it is expected to support more than one camera, with a different I2C bus for each one?
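  My naive idea was to parameterize the bus number and build the device path at runtime, something like the sketch below. The function name open_i2c() matches the vvcam code I'm reading, but how the bus number actually gets plumbed in (per-sensor config struct, module parameter, device tree) is exactly what I'm unsure about:

  ```c
  #include <assert.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Build "/dev/i2c-N" from a bus index instead of hard-coding
   * "/dev/i2c-0". */
  static void i2c_dev_path(unsigned int bus, char *buf, size_t len)
  {
      snprintf(buf, len, "/dev/i2c-%u", bus);
  }

  /* Sketch of a parameterized open_i2c(); the caller would pass the
   * bus index it got from its sensor configuration. */
  static int open_i2c(unsigned int bus)
  {
      char path[32];

      i2c_dev_path(bus, path, sizeof(path));
      return open(path, O_RDWR);
  }

  int main(void)
  {
      char p[32];

      /* e.g. a second sensor wired to I2C1 */
      i2c_dev_path(1, p, sizeof(p));
      printf("%s\n", p);
      return 0;
  }
  ```

  Is this the intended pattern, or does vvcam already have a per-instance config field for the I2C bus that I missed?
  
  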
- I see that there is an "ISP tuning" process needed to generate 3 config files for vvcam. Reading your guide, it is asking to acquire a raw image for the calibration. What is the process for capturing a raw image from the camera sensor? Is it possible to use vvcam without the configuration files first to get raw images, and then generate the tuning after?
- Which SDK gives the best performance for running an AI model on multiple camera streams? Would the k230_sdk (RT-Smart on the big core, Linux on the little core) perform better, since the AI would run on the big core with no extra Linux overhead? It also looks like the sensor driver might be easier to implement for RT-Smart.