
update MJPEGCamera #29

Merged
merged 1 commit into from
Jan 17, 2025

Conversation

walesch-yan
Collaborator

This PR updates the MJPEG camera by introducing several improvements to its general usage:

  • The image data is now sent to the streamer as a raw rgb24 image, matching how the data is stored for the other cameras and the format expected by the ffmpeg (MPEG1) streamer. This drops the class-specific `get_jpeg` function; the parent class implementation is used instead. It introduces two additional transformation operations when the MJPEG streamer is used, although using the MJPEG streamer with the MJPEG camera may already be unnecessary.
  • A `bytearray` object is used instead of a `bytes` object for the buffer, saving some memory allocation overhead.
  • Images in the stream are detected and saved based on the boundary defined in the response header, rather than the previously used `StreamConsumedError` (which is not raised on every stream).
  • Added a request exception handler to exit the while loop if an unexpected exception is raised by the request.
  • Integrated authentication for remote streams through flags set in the command line invocation (currently `Basic` and `Digest` authentication are available).

Where possible, this PR also enforces correct types by adding casts and proper type hints throughout the codebase.

Comment on lines +39 to +57
#### Authentication for MJPEG Streams

Some MJPEG streams may require authentication to access. To support such scenarios, the `MJPEGCamera` class includes built-in authentication support. Currently, both `Basic` and `Digest` authentication methods are supported.

Below is an example of how to use the video-streamer to access a stream requiring `Basic` authentication:

```bash
video-streamer -of MPEG1 -uri <stream_url> -auth Basic -user <username> -pass <password>
```

##### Explanation of the Parameters:
- `-of`: Specifies the output format; here `MPEG1` is used.
- `-uri`: The URL of the MJPEG stream.
- `-auth`: Specifies the authentication method (`Basic` or `Digest`).
- `-user`: The username for authentication.
- `-pass`: The password required for authentication.

Replace `<stream_url>`, `<username>` and `<password>` with the appropriate values for your stream. Ensure you handle credentials securely and avoid exposing them in public or shared scripts!

Collaborator Author

Here I added some documentation for the newly added authentication feature :)
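As background on what these flags actually transmit: Basic authentication simply base64-encodes `user:password` into an `Authorization` header (RFC 7617), while Digest adds a challenge/response exchange on top. A small stdlib sketch of the Basic case, independent of the actual implementation:

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build the HTTP Basic Authorization header (RFC 7617)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Example: the header sent for user/pass credentials
print(basic_auth_header("user", "pass"))
```

Note that base64 is an encoding, not encryption, which is why credentials should only be sent over HTTPS and kept out of shared scripts.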

Comment on lines +83 to +91
def _image_to_rgb24(self, image: bytes) -> bytearray:
    """
    Convert binary image data into a raw RGB24-encoded byte array.
    Supported image types include JPEG, PNG, BMP, TIFF, GIF, ...
    """
    image_array = np.frombuffer(image, dtype=np.uint8)
    frame = cv2.imdecode(image_array, cv2.IMREAD_COLOR)
    # cv2.imdecode returns None when the data cannot be decoded
    if frame is None:
        raise ValueError("Unable to decode image data")
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    return bytearray(rgb_frame.tobytes())
Collaborator Author

Another addition is this function in the Camera base class, which allows the transformation of any image type supported by OpenCV into the desired rgb24 format.
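To illustrate the rgb24 layout this conversion produces: it is a flat, row-major buffer with 3 bytes (R, G, B) per pixel, so pixel (x, y) lives at byte offset `(y * width + x) * 3`. A small stdlib sketch (not part of the PR, names are illustrative):

```python
def rgb24_pixel(buf: bytes, width: int, x: int, y: int) -> tuple:
    """Read pixel (x, y) from a raw rgb24 buffer (row-major, 3 bytes/pixel)."""
    i = (y * width + x) * 3
    return buf[i], buf[i + 1], buf[i + 2]

# A 2x2 rgb24 frame: red, green / blue, gray
frame = bytes([255, 0, 0,  0, 255, 0,
               0, 0, 255,  10, 20, 30])
print(rgb24_pixel(frame, 2, 1, 0))  # green pixel at (1, 0)
```

This fixed per-pixel size is also why the streamer needs the frame dimensions up front: the buffer itself carries no width/height metadata.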

Comment on lines +100 to +126
def _set_size(self) -> None:
    buffer = bytearray()
    # To set the size, extract the first image from the MJPEG stream
    try:
        response = requests.get(self._device_uri, stream=True, verify=False, auth=self._authentication)
        if response.status_code == 200:
            boundary = self._extract_boundary(response.headers)
            if not boundary:
                logging.error("Boundary not found in Content-Type header.")
                return

            for chunk in response.iter_content(chunk_size=8192):
                buffer.extend(chunk)

                while True:
                    frame, buffer = self._extract_frame(buffer, boundary)
                    if frame is None:
                        break
                    image = Image.open(io.BytesIO(frame))
                    self._width, self._height = image.size
                    return
        else:
            logging.error(f"Received unexpected status code {response.status_code}")
            return
    except requests.RequestException:
        logging.exception("Exception occurred during stream request")
        return
Collaborator Author

The first consumed image from the stream is used to set the size of the input source. This ensures correct encoding/decoding in the streamer later on.
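`_extract_frame` itself is not shown in this excerpt. As a rough sketch of what boundary-based splitting could look like: each part of a `multipart/x-mixed-replace` stream starts with the boundary marker, followed by part headers, a blank line, and the image bytes. The following is a hypothetical stand-in, not the PR's actual code:

```python
def extract_frame(buffer: bytearray, boundary: str):
    """Return (frame, remaining_buffer); frame is None if no complete part yet.

    Illustrative sketch of boundary-based MJPEG part splitting.
    """
    marker = b"--" + boundary.encode()
    start = buffer.find(marker)
    if start == -1:
        return None, buffer
    # Part headers end at the first blank line after the boundary
    header_end = buffer.find(b"\r\n\r\n", start)
    if header_end == -1:
        return None, buffer
    body_start = header_end + 4
    # The frame ends where the next boundary marker begins
    end = buffer.find(marker, body_start)
    if end == -1:
        return None, buffer
    frame = bytes(buffer[body_start:end]).rstrip(b"\r\n")
    return frame, buffer[end:]
```

Returning `None` for an incomplete part is what lets the caller keep appending chunks until a full frame has arrived.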

Member

Nice

Comment on lines +138 to +145
def _extract_boundary(self, headers):
    """
    Extract the boundary marker from the Content-Type header.
    """
    content_type = headers.get("Content-Type", "")
    if "boundary=" in content_type:
        # Some servers quote the boundary value (RFC 2046)
        return content_type.split("boundary=")[-1].strip('"')
    return None
Collaborator Author

The boundary extracted here acts as a delimiter between different images in an MJPEG stream. From my understanding, this delimiter can differ depending on the stream, but it should always be present in the response header, hence this extraction function.
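A standalone restatement of that logic for illustration, stripping optional quotes around the value (RFC 2046 allows a quoted boundary; this assumes, as the helper above does, that `boundary` is the last parameter in the header):

```python
def extract_boundary(content_type: str):
    """Pull the boundary token out of a multipart Content-Type value."""
    if "boundary=" not in content_type:
        return None
    return content_type.split("boundary=")[-1].strip('"')

print(extract_boundary("multipart/x-mixed-replace; boundary=frame"))
```

Typical MJPEG servers send something like `multipart/x-mixed-replace; boundary=frame`, but the token after `boundary=` is server-specific, which is why it must be read from the header rather than hard-coded.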

@marcus-oscarsson
Member

Very nice, thanks a lot!

@marcus-oscarsson marcus-oscarsson merged commit 52fe98f into main Jan 17, 2025
@walesch-yan walesch-yan deleted the yw-update-mjpeg-camera branch January 17, 2025 08:51