
recording #90

Open
manninb opened this issue May 30, 2023 · 20 comments
@manninb

manninb commented May 30, 2023

Hi, I tried recording using the command-line flags but was unable to (see below). Can you let me know where I went wrong?

pi@raspberrypi:~ $ workon owl
(owl) pi@raspberrypi:~ $ ./owl.py --recording
bash: ./owl.py: No such file or directory
(owl) pi@raspberrypi:~ $ ./owl.py [--recording]
bash: ./owl.py: No such file or directory
(owl) pi@raspberrypi:~ $ ./owl.py[--recording]
bash: ./owl.py[--recording]: No such file or directory
(owl) pi@raspberrypi:~ $ ./owl.py[--recording]
bash: ./owl.py[--recording]: No such file or directory
(owl) pi@raspberrypi:~ $ ./owl.py[--recording]
bash: ./owl.py[--recording]: No such file or directory
(owl) pi@raspberrypi:~ $ ./owl.py [--recording]
bash: ./owl.py: No such file or directory
(owl) pi@raspberrypi:~ $

Also, could you explain the difference between working in:
1 pi@raspberrypi:~ $
2 (owl) pi@raspberrypi:~ $ cd
3 (owl) pi@raspberrypi:/owl $

if this question makes sense.

@geezacoleman
Owner

geezacoleman commented May 30, 2023

When you type workon owl you're telling the system to change to the owl virtual environment. It's a way of keeping bits of software in one spot. You can have lots of different virtual environments to manage different software packages separately from each other. In this case we've called it owl.

The cd command just means 'change directory' - it's the same as opening up a folder by double clicking on it.

So here (owl) pi@raspberrypi:/owl $ you are both working in the owl virtual environment and also working in the owl directory/folder.

Your issue above (owl) pi@raspberrypi:~ $ ./owl.py --recording is just because you ran owl.py from outside the owl folder, so there's no file for it to actually run. Just do cd owl and then ./owl.py. The ./ in front of owl.py tells the shell to execute the file. You don't need the square brackets around [--recording]; that's just help-message notation showing the flag is optional. Just use --recording.
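The bracket notation comes from argparse's auto-generated usage line, where brackets mark a flag as optional rather than characters to type. A minimal sketch (the flag name is taken from this thread; the real owl.py parser has more options):

```python
import argparse

# stripped-down parser mirroring the --recording flag discussed above
parser = argparse.ArgumentParser(prog='owl.py')
parser.add_argument('--recording', action='store_true',
                    help='record video while running')  # rendered as [--recording] in the usage line

args = parser.parse_args(['--recording'])
print(args.recording)  # True when the flag is passed, False otherwise
```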

@geezacoleman
Owner

With the --recording feature, you will also need to make sure you have a button or switch attached to the pins to start/stop the recording.

If you want to capture whole images, I would recommend using the sampleMethod variable and changing it to 'whole'. You'll need to open the owl.py file and scroll to the very bottom; you should see it there under owl.hoot().
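At the bottom of owl.py the call looks something like the fragment below (parameter names as they appear in the version pasted later in this thread); changing sampleMethod from None to 'whole' saves entire frames:

```python
# bottom of owl.py - a fragment, not runnable on its own
owl.hoot(sprayDur=0.15,
         delay=0,
         sampleMethod='whole',   # was None; 'whole' saves whole images to saveDir
         sampleFreq=60,
         saveDir='/home/pi/owl-images',
         algorithm=args.algorithm,
         selectorEnabled=False,
         camera_name='hsv',
         minArea=10)
```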

@geezacoleman
Owner

How did you go with recording/image capture @manninb ? If you have any suggestions, let me know and we can try improving the process.

@manninb
Author

manninb commented Jun 14, 2023

I have not yet tried to record, sorry. Can you tell me which numbered pins I would attach the switch to?

@geezacoleman
Owner

If you want to record videos, it is a bit more involved. At the moment it is set to Pin 37 in the owl.py script. This can be changed of course. You'll also need to hook up the other side of the switch to a ground pin on the Pi, which is conveniently adjacent to it on Pin 39. A bit more detail on the process here.
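The switch on pin 37 just toggles the recording state on each press. The toggle/debounce logic is roughly the following sketch (not the actual owl.py code; the GPIO read is replaced with a plain boolean so it can run anywhere):

```python
class RecordingSwitch:
    """Toggle recording on each press of a momentary button.

    A real implementation would read board pin 37 (with the other leg of the
    switch on ground, pin 39) via RPi.GPIO or gpiozero; here the pin state is
    passed in as a boolean so the logic itself is hardware-independent.
    """

    def __init__(self):
        self.recording = False
        self._was_pressed = False

    def update(self, pressed):
        # flip state only on the rising edge (a new press), not while held down
        if pressed and not self._was_pressed:
            self.recording = not self.recording
        self._was_pressed = pressed
        return self.recording


sw = RecordingSwitch()
print(sw.update(True))   # press -> recording starts (True)
print(sw.update(False))  # release -> still recording (True)
print(sw.update(True))   # next press -> recording stops (False)
```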


If it is image data you want to collect though, it is much easier to just set the sampleMethod to 'whole' and save the images to the SD card or USB drive. I would recommend this option for training data collection.

@manninb
Author

manninb commented Jun 29, 2023

I had a look at changing the sample method but was not able to get the following text up:

# start the targeting!

owl.hoot(sprayDur=0.15,
         delay=0,
         sampleMethod=None,
         sampleFreq=60,
         saveDir='/home/pi/owl-images',
         algorithm=args.algorithm,
         selectorEnabled=False,
         camera_name='hsv',
         minArea=10)

I got the other variables to display (see below):

#!/home/pi/.virtualenvs/owl/bin/python3
from algorithms import exg, exg_standardised, exg_standardised_hue, hsv, exgr, gndvi, maxg
from imutils import grab_contours
import numpy as np
import cv2

class GreenOnBrown:
    def __init__(self, algorithm='exg', label_file='models/labels.txt'):
        self.algorithm = algorithm

    def inference(self,
                  image,
                  exgMin=30,
                  exgMax=250,
                  hueMin=30,
                  hueMax=90,
                  brightnessMin=5,
                  brightnessMax=200,
                  saturationMin=30,
                  saturationMax=255,
                  minArea=1,
                  show_display=False,
                  algorithm='exg'):
        '''
        Uses a provided algorithm and contour detection to determine green objects in the image. Min and Max
        thresholds are provided.
        :param image: input image to be analysed
        :param exgMin: minimum exG threshold value
        :param exgMax: maximum exG threshold value
        :param hueMin: minimum hue threshold value
        :param hueMax: maximum hue threshold value
        :param brightnessMin: minimum brightness threshold value
        :param brightnessMax: maximum brightness threshold value
        :param saturationMin: minimum saturation threshold value
        :param saturationMax: maximum saturation threshold value
        :param minArea: minimum area for the detection - used to filter out small detections
        :param show_display: True: show windows; False: operates in headless mode
        :param algorithm: the algorithm to use. Defaults to ExG if not correct
        :return: returns the contours, bounding boxes, centroids and the image on which the boxes have been drawn
        '''

        # different algorithm options, add in your algorithm here if you make a new one!
        threshedAlready = False
        if algorithm == 'exg':
            output = exg(image)

        elif algorithm == 'exgr':
            output = exgr(image)

        elif algorithm == 'maxg':
            output = maxg(image)

        elif algorithm == 'nexg':
            output = exg_standardised(image)

        elif algorithm == 'exhsv':
            output = exg_standardised_hue(image, hueMin=hueMin, hueMax=hueMax,
                                          brightnessMin=brightnessMin, brightnessMax=brightnessMax,
                                          saturationMin=saturationMin, saturationMax=saturationMax)

        elif algorithm == 'hsv':
            output, threshedAlready = hsv(image, hueMin=hueMin, hueMax=hueMax,
                                          brightnessMin=brightnessMin, brightnessMax=brightnessMax,
                                          saturationMin=saturationMin, saturationMax=saturationMax)

        elif algorithm == 'gndvi':
            output = gndvi(image)

        else:
            output = exg(image)
            print('[WARNING] DEFAULTED TO EXG')

        # run the thresholds provided
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        self.weedCenters = []
        self.boxes = []

        # if not a binary image, run an adaptive threshold on the area that fits within the thresholded bounds.
        if not threshedAlready:
            output = np.where(output > exgMin, output, 0)
            output = np.where(output > exgMax, 0, output)
            output = np.uint8(np.abs(output))
            if show_display:
                cv2.imshow("HSV Threshold on ExG", output)

            thresholdOut = cv2.adaptiveThreshold(output, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 31, 2)
            thresholdOut = cv2.morphologyEx(thresholdOut, cv2.MORPH_CLOSE, kernel, iterations=1)

        # if already binary, run morphological operations to remove any noise
        if threshedAlready:
            thresholdOut = cv2.morphologyEx(output, cv2.MORPH_CLOSE, kernel, iterations=5)

        if show_display:
            cv2.imshow("Binary Threshold", thresholdOut)

        # find all the contours on the binary images
        self.cnts = cv2.findContours(thresholdOut.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        self.cnts = grab_contours(self.cnts)

        # loop over all the detected contours and calculate the centres and bounding boxes
        for c in self.cnts:
            # filter based on total area of contour
            if cv2.contourArea(c) > minArea:
                # calculate the min bounding box
                startX, startY, boxW, boxH = cv2.boundingRect(c)
                endX = startX + boxW
                endY = startY + boxH

                label = 'WEED'
                cv2.putText(image, label, (startX, startY + 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 0, 0), 2)
                cv2.rectangle(image, (int(startX), int(startY)), (endX, endY), (0, 0, 255), 2)

                # save the bounding box
                self.boxes.append([startX, startY, boxW, boxH])
                # compute box center
                centerX = int(startX + (boxW / 2))
                centerY = int(startY + (boxH / 2))
                self.weedCenters.append([centerX, centerY])

        # returns the contours, bounding boxes, centroids and the image on which the boxes have been drawn
        return self.cnts, self.boxes, self.weedCenters, image

Can you tell me where I went wrong?

@geezacoleman
Owner

Could you provide a bit more info on what you did to get this text up? You just need to navigate to the owl.py script, double click to open it, and then select the option to Open File instead of Execute.

Once it's open, then just scroll all the way to the bottom and change the sampleMethod variable.

@manninb
Author

manninb commented Jul 6, 2023

I followed the instructions and got some of the text up as shown above, but was not able to find anything that enabled me to change the sample method. Is there something very simple I'm missing?

@geezacoleman
Owner

Which steps did you follow? You don't need to do anything in the command line, just open up the owl.py file as you would a word document/any file on a computer and scroll to the bottom. It looks like you may have opened the greenonbrown.py file instead.

@manninb
Author

manninb commented Jul 17, 2023

Sorry, yes I was in greenonbrown.py. What would be the path I would enter to save onto a USB drive?

@geezacoleman
Owner

USB drives will usually mount under /media, so to access the drive set the path to /media/name-of-your-usb. If you want to double check this, launch the OWL with a screen attached, open a file manager window and click on the mounted USB drive. Then copy the path from the address bar at the top.

Sometimes there may be permissions issues - so just double check this works by running the script before going to collect data.
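One way to catch permissions or mounting problems before a run is a quick writability check (a sketch, not part of owl.py; the mount name below is a placeholder you would substitute):

```python
import os

# hypothetical path - replace 'name-of-your-usb' with the drive's actual mount name
save_dir = '/media/name-of-your-usb'

def is_writable_dir(path):
    """Return True if the save directory exists and the current user can write to it."""
    return os.path.isdir(path) and os.access(path, os.W_OK)

if not is_writable_dir(save_dir):
    print(f'[WARNING] cannot write to {save_dir} - check the mount and permissions')
```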


@manninb
Author

manninb commented Dec 11, 2023

Hi, I opened up /home/pi/owl/owl.py and found the variable sampleMethod. If I change it to "whole" the unit stops working when I restart it, and starts working again if I change it back to "None". No images were recorded on the USB. I notice that when I type in "whole" the text does not become coloured as it does when I type in "None". Is it not recognising the input?

I presume the owl should still detect and trigger once the changes have been made.

I'm running a recent version of the software.

Any suggestions welcome.

@geezacoleman
Owner

Hi Bill,

The ability to record images has been substantially improved in the latest software. I would recommend upgrading, though you will need to download the latest Raspberry Pi operating system if you're still using the old one.

@manninb
Author

manninb commented May 13, 2024 via email

@geezacoleman
Owner

It looks like that approach will only run sudo apt-get update and sudo apt-get upgrade, which won't upgrade the OS version. So you'll potentially run into issues running the older version.
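On any Linux system the installed release is recorded in /etc/os-release, and apt-get upgrade alone won't change it. A small sketch (not part of owl.py) for checking which release you're on:

```python
# apt-get upgrade updates packages within the current release; the release
# itself only changes with a reflash or full OS upgrade
def read_os_release(path='/etc/os-release'):
    """Parse an os-release file into a dict of KEY: value pairs."""
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and '=' in line:
                key, _, value = line.partition('=')
                info[key] = value.strip('"')
    return info

# e.g. read_os_release().get('PRETTY_NAME') -> the human-readable release name
```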

What errors do you get running owl.py at the moment? It may not start on boot - I just realised I didn't fix that for the older software versions.

But the best approach is to reflash the SD card from scratch and install the software that way, unfortunately! It should only take 30 mins to 1 hr. I am planning on releasing a ready-to-go image for the OWL like before.

@geezacoleman
Owner

Also did you receive the OWL HAT in the mail? I posted it about a month ago so I hope it's arrived.

@manninb
Author

manninb commented May 13, 2024 via email

@geezacoleman
Owner

I haven't updated that image yet, so you'll need to start by reflashing the SD card (as described here: https://github.com/geezacoleman/OpenWeedLocator?tab=readme-ov-file#step-1a---rasperry-pi-os). Then just install the software as described below.

That's a shame - I can't seem to find the tracking info unfortunately.

@manninb
Author

manninb commented May 14, 2024 via email

@manninb
Author

manninb commented May 14, 2024 via email

2 participants