Add a Camera Sensor to Autonomous Self-Driving Car with CARLA

In the last tutorial, we created the blueprint of a Tesla Model 3 as our autonomous self-driving car and spawned it in the CARLA environment using a random location as the spawn point.
We also added some basic controls to the autonomous self-driving car.
In this tutorial, we will add a sensor to the car and get data from the sensor.
Recall that sensors are also actors in the CARLA environment. This means the sensor will be added to our list of actors and cleaned up afterward.
Also, since we intend to apply the concept of reinforcement learning in training our autonomous self-driving car, we will need two types of sensor data: the image data from the forward-facing camera, and data about driving events and errors such as collisions, lane changes, traffic-signal violations, and illegal turns.
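To make the second kind of data concrete, here is a purely hypothetical sketch of how such driving events might later be folded into a reward signal for reinforcement learning. The event names and penalty values below are illustrative placeholders, not CARLA identifiers:

```python
# Hypothetical reward shaping for later RL training. The event names and
# penalty values are illustrative placeholders, not CARLA identifiers.
EVENT_PENALTIES = {
    "collision": -200.0,
    "lane_invasion": -5.0,
    "ran_red_light": -50.0,
}

def step_reward(events, speed_kmh):
    """Small base reward for moving forward, minus penalties for driving errors."""
    reward = 1.0 if speed_kmh > 10 else -1.0
    reward += sum(EVENT_PENALTIES.get(e, 0.0) for e in events)
    return reward

print(step_reward([], 30))             # 1.0  (driving cleanly)
print(step_reward(["collision"], 30))  # -199.0  (crashed)
```

The exact shape of the reward function is something we will revisit when we actually train the agent; the point here is only that event data from sensors translates naturally into rewards and penalties.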
With this in mind, let's start working on the camera sensors and figure out how to access the data.
How to Use a Camera Sensor in CARLA With OpenCV
For starters, the forward-facing camera placed on the hood of the car will be our primary sensor. In the long run, we can incorporate other sensors as well, but a forward-facing camera on the hood is a good place to start.
You can learn more about the various sensor types available in CARLA, such as lidar, radar, and collision detectors, and how to work with them in the CARLA documentation.
For now, we will use a camera sensor for our autonomous self-driving car.
We will keep working in the same file, autonomous-car-tutorial.py.
We will start by importing the OpenCV package (installed in the second tutorial), which we will use to process and display the images coming from the camera sensor.
To do this, add the following code at the top of our program just below the last import we made in the previous tutorial.
import cv2
Next, we will declare a couple of constants at the top of our program, below the OpenCV import.
The code below defines constants for the width and height of our camera display window:
IMG_WIDTH = 640
IMG_HEIGHT = 480
Now, we will load in the blueprint for the sensor and set some of the attributes:
# get the blueprint for this sensor
blueprint = blueprint_library.find('sensor.camera.rgb')
# change the dimensions of the image
blueprint.set_attribute('image_size_x', f'{IMG_WIDTH}')
blueprint.set_attribute('image_size_y', f'{IMG_HEIGHT}')
blueprint.set_attribute('fov', '110')
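As a side note, the fov attribute fixes the focal length of the camera's pinhole model, which becomes relevant later if you ever need to project 3D points into the image. Assuming a standard pinhole model (the relation below is a general camera formula, not something specific to this tutorial's code), the focal length in pixels follows from the image width and the horizontal field of view:

```python
import math

IMG_WIDTH = 640
FOV = 110.0  # degrees, matching the 'fov' attribute set on the blueprint

# Pinhole camera relation: focal length (pixels) = w / (2 * tan(fov / 2))
focal = IMG_WIDTH / (2.0 * math.tan(math.radians(FOV) / 2.0))
print(round(focal, 2))  # roughly 224 pixels
```

A wider field of view therefore means a shorter focal length: the 110-degree setting trades per-pixel detail for a broader view of the road, which suits a forward-facing driving camera.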
Next, we need to add this sensor to our car.
First, we will define the sensor's position relative to the vehicle before attaching it to the car.
# Adjust sensor relative to vehicle
spawn_point = carla.Transform(carla.Location(x=2.5, z=0.7))
# spawn the sensor and attach it to the vehicle.
sensor = world.spawn_actor(blueprint, spawn_point, attach_to=vehicle)
In the code above, we positioned the sensor at a specific spot relative to the hood of the autonomous self-driving car by setting the x (forward) and z (height) offsets. We then spawned it in the world, attached it to the vehicle, and stored the resulting actor in the sensor variable.
Next, we add this sensor to our list of actors using the code below:
# add the sensor to list of actors
actor_list.append(sensor)
Lastly, we want to listen for image data from this sensor and then make use of that data.
The function below processes the data from each image captured by the camera sensor and returns a normalized result.
- Python Code
def decode_img(image):
    # image.raw_data is a flat buffer of bytes; turn it into a NumPy array
    raw_image = np.array(image.raw_data)
    # reshape the flat buffer into height x width x 4 channels (BGRA)
    image_shape = raw_image.reshape((IMG_HEIGHT, IMG_WIDTH, 4))
    # keep only the first three channels, dropping the alpha channel
    rgb_value = image_shape[:, :, :3]
    cv2.imshow("", rgb_value)
    cv2.waitKey(1)
    return rgb_value / 255.0  # normalize pixel values to [0, 1]
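Since image.raw_data arrives as a flat buffer of 4-channel bytes, the reshape-and-slice logic inside decode_img() can be sanity-checked with a synthetic frame, using plain NumPy and no simulator:

```python
import numpy as np

IMG_WIDTH, IMG_HEIGHT = 640, 480

# Fake a raw frame: a flat buffer of WIDTH * HEIGHT * 4 bytes, as
# image.raw_data delivers it (4 channels per pixel, alpha included).
raw = np.zeros(IMG_WIDTH * IMG_HEIGHT * 4, dtype=np.uint8)

frame = raw.reshape((IMG_HEIGHT, IMG_WIDTH, 4))  # rows, cols, channels
bgr = frame[:, :, :3]                            # drop the alpha channel
normalized = bgr / 255.0                         # floats in [0, 1]

print(bgr.shape)  # (480, 640, 3)
```

Note that the reshape order is (IMG_HEIGHT, IMG_WIDTH, 4), not width first: NumPy image arrays are indexed rows-then-columns, so swapping the two constants would scramble the picture.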
To consume the data from the sensor, we will make use of a lambda function:
sensor.listen(lambda data: decode_img(data))
In the code above, each frame of data produced by the sensor is passed to the decode_img() function created above.
With this, we can compile the complete code we have so far:
- Python Code
import glob
import os
import sys

try:
    sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass

import carla
import random
import time
import numpy as np
import cv2

IMG_WIDTH = 640
IMG_HEIGHT = 480

def decode_img(image):
    raw_image = np.array(image.raw_data)
    image_shape = raw_image.reshape((IMG_HEIGHT, IMG_WIDTH, 4))
    rgb_value = image_shape[:, :, :3]
    cv2.imshow("", rgb_value)
    cv2.waitKey(1)
    return rgb_value / 255.0

actor_list = []
try:
    client = carla.Client('localhost', 2000)
    client.set_timeout(2.0)
    world = client.get_world()
    blueprint_library = world.get_blueprint_library()
    tesla_model3 = blueprint_library.filter('model3')[0]
    print(tesla_model3)
    spawn_point = random.choice(world.get_map().get_spawn_points())
    vehicle = world.spawn_actor(tesla_model3, spawn_point)
    control_vehicle = carla.VehicleControl(throttle=1.0, steer=0.0)
    vehicle.apply_control(control_vehicle)
    actor_list.append(vehicle)

    # get the blueprint for this sensor
    blueprint = blueprint_library.find('sensor.camera.rgb')
    # change the dimensions of the image
    blueprint.set_attribute('image_size_x', f'{IMG_WIDTH}')
    blueprint.set_attribute('image_size_y', f'{IMG_HEIGHT}')
    blueprint.set_attribute('fov', '110')

    # Adjust sensor relative to vehicle
    spawn_point = carla.Transform(carla.Location(x=2.5, z=0.7))
    # spawn the sensor and attach it to the vehicle.
    sensor = world.spawn_actor(blueprint, spawn_point, attach_to=vehicle)
    # add sensor to list of actors
    actor_list.append(sensor)
    # do something with this sensor
    sensor.listen(lambda data: decode_img(data))
    # sleep for 10 seconds, then finish:
    time.sleep(10)
finally:
    print('Cleaning up actors...')
    for actor in actor_list:
        actor.destroy()
    print('Done, Actors cleaned-up successfully!')
We can run the program we have so far.
This will pop up a new window showing the camera sensor's feed, alongside the window showing the CARLA server environment with our autonomous self-driving car.
Wrap Off
If you have followed this tutorial series to this point, give yourself a well-deserved round of applause.
In the next tutorial, we will dive into the world of artificial intelligence and learn how to use reinforcement learning to train our autonomous self-driving car.
If you run into errors or are unable to complete this tutorial, feel free to contact us anytime, and we will promptly resolve it. You can also request clarification, download this tutorial as a PDF, or report bugs using the buttons below.