
opensportml


Installation

## install.packages("remotes")
remotes::install_github("openvolley/opensportml")

The opensportml package provides image and video machine learning tools for sports analytics. Many of its functions are re-exported from the openvolley/ovml and openvolley/ovideo packages, which provide similar functionality but specifically for volleyball.

Currently two versions of the YOLO object detection algorithm are included. These have been implemented on top of the torch R package, meaning that no Python installation is required on your system.
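
The version to use can be chosen when the detection network is constructed. A minimal sketch, assuming (as in ovml) that ovml_yolo() takes the version as its first argument and that a lighter "4-tiny" variant is among the accepted strings; see ?ovml_yolo for the versions actually available in your installed copy:

dn_tiny <- ovml_yolo("4-tiny") ## assumed version string: a smaller, faster variant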

Example

Use a YOLOv4 network to recognize objects in an image. We use an example image bundled with the package:

library(opensportml)
img <- os_example_image()
ovml_ggplot(img)

Construct the network. The first time this function is run, it will download and cache the network weights file (~250MB).

dn <- ovml_yolo()

Now we can use the network to detect objects in our image:

dets <- ovml_yolo_detect(dn, img)
ovml_ggplot(img, dets)
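
ovml_yolo_detect returns the detections as a data frame with one row per detected object. The bounding-box columns xmin, xmax, ymin and ymax are used below; we also assume (as in ovml) that each detection carries a class label and a confidence score, which are handy for a quick sanity check:

head(dets)
table(dets$class) ## which object classes were found (class column assumed)
range(dets$score) ## spread of detection confidences (score column assumed)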

We can transform the image detections to real-world court coordinates. First we need to define the court reference points used for the transformation. We can use the os_shiny_court_ref helper app for this:

ref <- os_shiny_court_ref(img)

ref should look something like:

ref
#> $video_width
#> [1] 1024
#> 
#> $video_height
#> [1] 768
#> 
#> $court_ref
#> # A tibble: 4 x 4
#>   image_x image_y court_x court_y
#>     <dbl>   <dbl>   <dbl>   <dbl>
#> 1  0.0256   0.386    12.5      46
#> 2  0.283    0.117   100         0
#> 3  0.867    0.475    87.5     154
#> 4  0.582    0.626     0       200
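
ref is just a list, so for non-interactive use (scripts, reproducible reports) it can also be constructed directly rather than via the shiny app. A sketch using the values shown above:

ref <- list(video_width = 1024, video_height = 768,
            court_ref = data.frame(image_x = c(0.0256, 0.283, 0.867, 0.582),
                                   image_y = c(0.386, 0.117, 0.475, 0.626),
                                   court_x = c(12.5, 100, 87.5, 0),
                                   court_y = c(46, 0, 154, 200)))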

Now use it with the ov_transform_points function (note that currently this function expects the image coordinates to be normalized with respect to the image width and height):

court_xy <- ov_transform_points(x = (dets$xmin + dets$xmax)/2/ref$video_width, y = dets$ymin/ref$video_height,
                                ref = ref$court_ref, direction = "to_court")
dets <- cbind(dets, court_xy)

And plot it:

library(ggplot2)
ggplot(dets, aes(x, y)) + 
    os_ggcourt(line_colour = "white") + geom_point(colour = "blue", size = 3) +
    ggplot2::theme(panel.background = ggplot2::element_rect(fill = "#95a264"))

Keep in mind that this transformation uses the bottom-centre of each bounding box and assumes that this point lies on the court surface (the floor). Locations associated with truncated bounding boxes, or with objects that are not on the court surface (a tennis racket in a player's hand, players jumping, people in elevated positions such as the referee's stand), will appear further away from the camera than they actually are.
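
If those cases matter for your analysis, the detections can be filtered before transforming and plotting. A sketch, assuming the class and score columns mentioned earlier, and that ymin is expressed in pixels measured from the bottom of the image (consistent with its use as the floor contact point above):

keep <- dets$class == "person" &   ## people are normally in contact with the floor
    dets$score > 0.5 &             ## drop low-confidence detections
    dets$ymin > 1                  ## drop boxes truncated at the bottom edge of the frame
dets_on_floor <- dets[keep, ]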
