
REQUEST hyperion source from cam in front of tv - prepare input region

Discussion in 'Feature Requests' started by giovanne, 2 January 2018.

  1. The

    The New Member

    Messages:
    23
    Hardware:
    RPi2
    Flovie, I was talking to a coworker about a similar concept on the way back from a tech conference this past Friday. I agree with most of your pros/cons.

    Since I'm not familiar with the "proto" approach, I think I'm going to explore the V4L2 virtual camera concept in combination with the OpenCV fisheye undistort. I'm completely unsure of the performance impact on the video stream. I'll play a bit with it, but I haven't purchased a true fisheye-lens camera yet.
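
    To get a feel for what the undistort actually has to compute, here is a minimal numpy sketch of the remapping math behind fisheye dewarping (the same idea OpenCV's fisheye module implements, but written out by hand). It assumes an ideal equidistant-projection lens; the focal lengths and function name are illustrative placeholders, not anything from OpenCV's API:

    ```python
    import numpy as np

    def fisheye_to_rectilinear_map(w, h, f_fish, f_rect):
        """For each pixel of a rectilinear output image, compute the source
        coordinate in an equidistant fisheye image.

        Equidistant model: r_fish = f_fish * theta, where theta is the angle
        from the optical axis; rectilinear: r_rect = f_rect * tan(theta).
        (Assumes an ideal lens centered on the image; real lenses need
        calibrated distortion coefficients.)"""
        cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
        xs, ys = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
        r_rect = np.hypot(xs, ys)                 # radius in the output image
        theta = np.arctan2(r_rect, f_rect)        # angle from the optical axis
        r_fish = f_fish * theta                   # equidistant source radius
        scale = np.where(r_rect > 0, r_fish / np.maximum(r_rect, 1e-9), 1.0)
        map_x = cx + xs * scale                   # source x for each output pixel
        map_y = cy + ys * scale                   # source y for each output pixel
        return map_x, map_y
    ```

    Sampling the fisheye frame at `(map_y, map_x)` (e.g. with bilinear interpolation) then yields the dewarped image; since `theta <= tan(theta)`, pixels near the border pull from closer to the center, which is exactly the "stretching" a fisheye undistort performs.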

    Regarding "proto", are you talking about Protocol Buffers? Can you provide a hyperlink, please?
     
  2. The

    The New Member

    Messages:
    23
    Hardware:
    RPi2
    I explored the V4L2 virtual camera concept in combination with the OpenCV fisheye undistort options. While I'm not 100% certain, they both look to be written more for a still-image scenario. Digging a bit more, I ran into GStreamer, which seems to have some promise: it looks like it exposes a dewarp function based on OpenCV. I've run out of patience for today, and I'm not convinced I'll spend more time digging into this.

    Interesting:
    Those install instructions didn't work for me, and I gave up. The raspivid command looks like it's piping video through gst-launch-1.0. That gst-launch smells like it has a way to construct a pipeline... maybe that dewarp can be used in some obtuse gst-launch command to construct a "pipeline" that surfaces a dewarped video stream straight from the camera, without building a special GStreamer plugin or writing code. I'm only 5% certain of that statement.
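
    To make that "pipeline" idea concrete, here is an untested sketch of what such a gst-launch-1.0 invocation might look like, assuming the OpenCV-based dewarp element from gst-plugins-bad is installed. The device path is illustrative, and the element's availability and properties would need checking with `gst-inspect-1.0 dewarp` first:

    ```shell
    # Untested sketch: capture from a V4L2 camera, dewarp, and display.
    # Element and property names must be verified with gst-inspect-1.0.
    gst-launch-1.0 v4l2src device=/dev/video0 \
        ! videoconvert \
        ! dewarp \
        ! videoconvert \
        ! autovideosink
    ```

    If that works at all, swapping the sink for something hyperion can consume would be the next experiment.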

    Other examples:
     
  3. Flovie

    Flovie New Member

    Messages:
    5
    Hardware:
    RPi3
    I am not sure gstreamer is even required, because ultimately you only need to calculate a dewarped image and send it via protobuffer to the hyperion instance, so you don't need to transform the image back into a virtual camera. Here is an example from the hyperion wiki showing how to work with the protobuffer:

    https://hyperion-project.org/wiki/Protobuffer-Java-client-example

    However, it is based on Java. A few years ago, I worked on a music visualizer with hyperion. It was also written in Java and used the protobuffer to control the ambilight. It is still working pretty well. I will consider publishing the code; maybe it helps with understanding the protobuffer.
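
    For anyone who would rather not start from Java: as far as I can tell from the wiki example, the proto server speaks length-prefixed protobuf messages over TCP, with each serialized message preceded by its size as a 4-byte big-endian integer. Here is a minimal Python sketch of just that framing layer, with the serialized message stubbed out as raw bytes. The framing, the default port 19445, and the message names are my assumptions and should be checked against hyperion's message.proto:

    ```python
    import socket
    import struct

    def frame_message(serialized: bytes) -> bytes:
        """Prefix a serialized protobuf message with its length as a 4-byte
        big-endian integer (framing assumed from the wiki's Java client)."""
        return struct.pack(">I", len(serialized)) + serialized

    def send_request(host: str, serialized: bytes, port: int = 19445) -> None:
        """Open a TCP connection to the hyperion proto server and send one
        framed message. Port 19445 is assumed to be the default proto port;
        verify it against your hyperion config."""
        with socket.create_connection((host, port)) as sock:
            sock.sendall(frame_message(serialized))

    # The payload itself would come from a generated protobuf class, e.g.
    # something like HyperionRequest(...).SerializeToString() -- stubbed here.
    ```

    The dewarped frame from OpenCV would then be packed into an image request and pushed through `send_request`, with no virtual camera in between.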
     
  4. The

    The New Member

    Messages:
    23
    Hardware:
    RPi2
    Yeah, that example is pretty simplistic in that it doesn't really process a "real" image or video stream. As I don't fully understand the hyperion API, it isn't clear to me from the documentation and source code who is responsible for obtaining an image for processing. I'd have no idea how to handle the dewarping math and would have to rely on a library such as OpenCV. This StackOverflow post seems to delve into some of this, and into the issues with dewarping a true 180° fisheye into a usable image. This was fun to look into, but I'm running out of motivation due to the many potential roadblocks.