Add data provider #69

Merged (55 commits, Jan 31, 2025)

Commits
9f4ffab
add polling uniswap and random number source
Jan 10, 2025
3cbe5cf
add fake alchemy api key
Jan 10, 2025
9ec15ef
update comment
Jan 10, 2025
4109b19
cleanup
Jan 10, 2025
6427494
remove sample configs
Jan 10, 2025
a43db35
respond to comments
Jan 11, 2025
da60e79
rename init -> factory
Jan 11, 2025
1c0526d
source packages have basically the same file names
Jan 11, 2025
6d3b2fc
respond to comments
Jan 14, 2025
98df4ed
add github action to check codegen
Jan 14, 2025
16f32c5
bump python version
Jan 14, 2025
5ae8797
test codegen check
Jan 14, 2025
ca42582
Revert "test codegen check"
Jan 14, 2025
a0fc5d1
simplify docker compose
Jan 14, 2025
428d118
use references in config schema
Jan 14, 2025
ee266d0
include configs repo
Jan 14, 2025
6da3817
fix tests
Jan 14, 2025
e1ace31
include data source id in each config object
Jan 14, 2025
5363606
cleanup, enforce uniqueness of value ids at runtime
Jan 15, 2025
9db1452
uniswap_v2 -> uniswapv2
Jan 15, 2025
7ffa341
add docs for writing a new data source, configuring the data provider…
Jan 15, 2025
6b87f30
bump version
Jan 15, 2025
7821db2
add to the apps readme
Jan 15, 2025
243b8e8
add dataSource as required field in schema
Jan 16, 2025
40e0753
fix valid output url
akawalsky Jan 21, 2025
bb45b4e
CLI Codegen for Data Provider Sources
ACK101101 Jan 22, 2025
2e69c5e
refactor data provider code generation
ACK101101 Jan 22, 2025
9b4677e
added updated command and animation command
ACK101101 Jan 22, 2025
e2a32bd
set animation to run at startup
ACK101101 Jan 22, 2025
a80b409
ci fix? make command to install cli and animation only for certain co…
ACK101101 Jan 23, 2025
867a8fa
move cli commands before rust to prevent github CI error
ACK101101 Jan 23, 2025
bdc9f22
reverted makefile change
ACK101101 Jan 23, 2025
76eb6ce
ran update
ACK101101 Jan 23, 2025
3d89e56
updated autogen comment, fixed pascal to camel conversion for acronym…
ACK101101 Jan 24, 2025
c3e8fe7
removed animation code and frames, stored in branch alexander/stork-d…
ACK101101 Jan 27, 2025
a6a08bb
fixed capitalization in data_source template
ACK101101 Jan 27, 2025
e66472b
Add RaydiumCLMM as Data Source Using CLI Tool Via Helius
ACK101101 Jan 28, 2025
45c3808
cleanup commented out lines
ACK101101 Jan 28, 2025
e2ec424
Merge pull request #82 from Stork-Oracle/alexander/sto-669-add-raydiu…
ACK101101 Jan 29, 2025
b59433c
wrapped start cli command with new make command
ACK101101 Jan 29, 2025
03ba118
Merge pull request #78 from Stork-Oracle/alexander/sto-646-codegen-fo…
ACK101101 Jan 29, 2025
810ac35
QOL Improvements to Data Provider
ACK101101 Jan 29, 2025
c23be06
separate generate from data_provider
akawalsky Jan 29, 2025
9ad0f41
fix generate command
akawalsky Jan 29, 2025
1639c40
remove unused flag
akawalsky Jan 29, 2025
a2f70db
added animation, fixed make targets, and ci paths
ACK101101 Jan 29, 2025
5220784
another fix for ci
ACK101101 Jan 29, 2025
d48814a
merge with sto-691-qol-improvements-to-data-provider
ACK101101 Jan 29, 2025
dedb935
comment out remove for now
ACK101101 Jan 29, 2025
e105b0f
Merge pull request #85 from Stork-Oracle/separate-generate
ACK101101 Jan 29, 2025
155d315
use different provider urls so api keys are no longer required, added…
ACK101101 Jan 30, 2025
dd2db49
fixed config_test and make ci names consistent
ACK101101 Jan 30, 2025
1c45644
fixed typo and shortened readme
ACK101101 Jan 30, 2025
77a2473
Merge pull request #84 from Stork-Oracle/alexander/sto-691-qol-improv…
ACK101101 Jan 30, 2025
9731058
Add data provider integration test (#79)
harryrackmil Jan 31, 2025
50 changes: 50 additions & 0 deletions apps/lib/data_provider/command.go
@@ -0,0 +1,50 @@
package data_provider

import (
	"fmt"
	"time"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/pkgerrors"
	"github.com/spf13/cobra"
)

var DataProviderCmd = &cobra.Command{
	Use:   "start",
	Short: "Start a process to fetch prices from data sources",
	RunE:  runDataProvider,
}

// required
const ConfigFilePathFlag = "config-file-path"
const WebsocketUrl = "ws-url"

func init() {
	DataProviderCmd.Flags().StringP(ConfigFilePathFlag, "c", "", "the path of your config json file")
	DataProviderCmd.Flags().StringP(WebsocketUrl, "w", "", "the websocket url to write updates to")
Contributor Author:
This could be configured in the config json if we wanted, but it felt like more of a runtime configuration (I might want to test without a websocket url at first, to just look at the prices).

akawalsky (Contributor), Jan 13, 2025:
What do you think of making this a bit more generic - something like -o where the output could eventually take the form of

  1. ws interface
  2. http interface
  3. a file buffer interface

which could be denoted by ws(s)://, http(s)://, file://, s3:// etc

Only ws should need to be supported right now though

Contributor Author:
makes sense - added a Writer interface and a GetWriter function where we can do some branching in the future. Currently it will fail if the --output-address isn't either blank or prefixed with ws://


	DataProviderCmd.MarkFlagRequired(ConfigFilePathFlag)
}

func runDataProvider(cmd *cobra.Command, args []string) error {
	configFilePath, _ := cmd.Flags().GetString(ConfigFilePathFlag)
	wsUrl, _ := cmd.Flags().GetString(WebsocketUrl)

	mainLogger := mainLogger()

	zerolog.TimeFieldFormat = time.RFC3339Nano
	zerolog.DurationFieldUnit = time.Nanosecond
	zerolog.ErrorStackMarshaler = pkgerrors.MarshalStack

	mainLogger.Info().Msg("Starting data provider")

	config, err := loadConfig(configFilePath)
	if err != nil {
		return fmt.Errorf("error loading config: %v", err)
	}

	runner := NewDataProviderRunner(*config, wsUrl)
	runner.Run()

	return nil
}
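The review thread above converges on a generic output address with scheme-based branching. A minimal, self-contained sketch of what that GetWriter branching could look like (the Writer interface, writer types, and error text here are assumptions for illustration, not the merged implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// Writer is a stand-in for the output interface discussed in the review
// thread; the real interface presumably exposes a method to write updates.
type Writer interface {
	Name() string
}

type websocketWriter struct{ url string }

func (w websocketWriter) Name() string { return "websocket" }

// noOpWriter covers the blank output address case: compute prices without
// publishing them anywhere.
type noOpWriter struct{}

func (noOpWriter) Name() string { return "no-op" }

// GetWriter branches on the output address scheme. Only ws(s):// and a
// blank address are supported for now; http://, file://, s3://, etc. could
// be added as new cases later.
func GetWriter(outputAddress string) (Writer, error) {
	switch {
	case outputAddress == "":
		return noOpWriter{}, nil
	case strings.HasPrefix(outputAddress, "ws://"), strings.HasPrefix(outputAddress, "wss://"):
		return websocketWriter{url: outputAddress}, nil
	default:
		return nil, fmt.Errorf("unsupported output address: %s", outputAddress)
	}
}

func main() {
	w, err := GetWriter("ws://localhost:8080")
	fmt.Println(w.Name(), err)

	_, err = GetWriter("s3://bucket/updates")
	fmt.Println(err)
}
```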
20 changes: 20 additions & 0 deletions apps/lib/data_provider/config.go
@@ -0,0 +1,20 @@
package data_provider

import (
	"encoding/json"
	"fmt"
	"os"
)

func loadConfig(configPath string) (*DataProviderConfig, error) {
	configBytes, err := os.ReadFile(configPath)
	if err != nil {
		return nil, fmt.Errorf("failed to read config file: %v", err)
	}

	var config DataProviderConfig
	if err := json.Unmarshal(configBytes, &config); err != nil {
		return nil, fmt.Errorf("failed to unmarshal config file: %v", err)
	}
	return &config, nil
}
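loadConfig unmarshals a DataProviderConfig from JSON. Given the json tags in model.go and the random source's config fields, a config file could plausibly look like the following (the value id is made up for illustration):

```json
{
  "sources": [
    {
      "id": "MY_RANDOM_VALUE",
      "dataSource": "RANDOM_NUMBER",
      "config": {
        "updateFrequency": "1s",
        "minValue": 0,
        "maxValue": 100
      }
    }
  ]
}
```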
31 changes: 31 additions & 0 deletions apps/lib/data_provider/data_source.go
@@ -0,0 +1,31 @@
package data_provider

type dataSource interface {
	// Add all value updates to updatesCh
	Run(updatesCh chan DataSourceUpdateMap)
	GetDataSourceId() DataSourceId
}

func buildDataSources(config DataProviderConfig) []dataSource {
	// group by data source id to support batched feeds
Contributor Author:
overkill for our current sources (right now every dataSource is 1:1 with a value) but we might want to support batching of feeds in the future

	sourceConfigsByDataSource := make(map[DataSourceId][]DataProviderSourceConfig)
	for _, sourceConfig := range config.Sources {
		dataSourceId := sourceConfig.DataSourceId
		if _, ok := sourceConfigsByDataSource[dataSourceId]; !ok {
			sourceConfigsByDataSource[dataSourceId] = make([]DataProviderSourceConfig, 0)
		}
		sourceConfigsByDataSource[dataSourceId] = append(sourceConfigsByDataSource[dataSourceId], sourceConfig)
	}

	// initialize data sources
	allDataSources := make([]dataSource, 0)
	for dataSourceId, sourceConfigs := range sourceConfigsByDataSource {
		dataSourceBuilder := GetDataSourceBuilder(dataSourceId)
		dataSources := dataSourceBuilder(sourceConfigs)

		allDataSources = append(allDataSources, dataSources...)
	}

	return allDataSources
}
12 changes: 12 additions & 0 deletions apps/lib/data_provider/data_source_registry.go
@@ -0,0 +1,12 @@
package data_provider

func GetDataSourceBuilder(dataSourceId DataSourceId) func([]DataProviderSourceConfig) []dataSource {
	switch dataSourceId {
	case UniswapV2DataSourceId:
Contributor Author:
This switch statement is the only shared code that needs to change when we write a new integration.

		return getUniswapV2DataSources
	case RandomDataSourceId:
		return getRandomDataSource
	default:
		panic("unknown data source id " + dataSourceId)
	}
}
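Per the author's note, wiring up a new integration only touches this switch. A self-contained sketch of the pattern using a hypothetical COINBASE source (every identifier and constant value below is illustrative, including the builder return type, which stands in for the real []dataSource):

```go
package main

import "fmt"

type DataSourceId string

// builderFunc stands in for the real builder signature,
// func([]DataProviderSourceConfig) []dataSource.
type builderFunc func() string

const (
	UniswapV2DataSourceId DataSourceId = "UNISWAP_V2" // value illustrative
	RandomDataSourceId    DataSourceId = "RANDOM_NUMBER"
	CoinbaseDataSourceId  DataSourceId = "COINBASE" // hypothetical new source
)

func GetDataSourceBuilder(dataSourceId DataSourceId) builderFunc {
	switch dataSourceId {
	case UniswapV2DataSourceId:
		return func() string { return "uniswap v2 sources" }
	case RandomDataSourceId:
		return func() string { return "random sources" }
	case CoinbaseDataSourceId: // the one case each new integration adds
		return func() string { return "coinbase sources" }
	default:
		panic("unknown data source id " + string(dataSourceId))
	}
}

func main() {
	fmt.Println(GetDataSourceBuilder(CoinbaseDataSourceId)())
}
```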
22 changes: 22 additions & 0 deletions apps/lib/data_provider/logger.go
@@ -0,0 +1,22 @@
package data_provider

import (
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func baseAppLogger() zerolog.Logger {
	return log.With().Str("application", "stork-data-provider").Logger()
}

func mainLogger() zerolog.Logger {
	return baseAppLogger().With().Str("service", "main").Logger()
}

func writerLogger() zerolog.Logger {
	return baseAppLogger().With().Str("service", "writer").Logger()
}

func dataSourceLogger(dataSourceId DataSourceId) zerolog.Logger {
	return baseAppLogger().With().Str("service", "data_source").Str("data_source_id", string(dataSourceId)).Logger()
}
40 changes: 40 additions & 0 deletions apps/lib/data_provider/model.go
@@ -0,0 +1,40 @@
package data_provider

import (
	"time"
)

type (
	DataSourceId string
	ValueId      string

	DataProviderSourceConfig struct {
		Id           ValueId      `json:"id"`
		DataSourceId DataSourceId `json:"dataSource"`
		Config       any          `json:"config"`
	}

	DataProviderConfig struct {
		Sources []DataProviderSourceConfig `json:"sources,omitempty"`
	}

	DataSourceValueUpdate struct {
		ValueId      ValueId
		DataSourceId DataSourceId
		Timestamp    time.Time
		Value        float64
	}

	DataSourceUpdateMap map[ValueId]DataSourceValueUpdate

	ValueUpdate struct {
		PublishTimestamp int64   `json:"t"`
		ValueId          ValueId `json:"a"`
		Value            string  `json:"v"`
	}

	ValueUpdateWebsocketMessage struct {
		Type string        `json:"type"`
		Data []ValueUpdate `json:"data"`
	}
)
71 changes: 71 additions & 0 deletions apps/lib/data_provider/random.go
Contributor Author:
A simpler example than Uniswap, still using the scheduled data source concept.

@@ -0,0 +1,71 @@
package data_provider

import (
	"math/rand"
	"time"

	"github.com/mitchellh/mapstructure"
)

const RandomDataSourceId = "RANDOM_NUMBER"

type randomConfig struct {
	UpdateFrequency string  `json:"updateFrequency"`
	MinValue        float64 `json:"minValue"`
	MaxValue        float64 `json:"maxValue"`
}

type randomConnector struct {
	valueId         ValueId
	config          randomConfig
	updateFrequency time.Duration
}

func newRandomConnector(sourceConfig DataProviderSourceConfig) *randomConnector {
	var randomConfig randomConfig
	mapstructure.Decode(sourceConfig.Config, &randomConfig)

	updateFrequency, err := time.ParseDuration(randomConfig.UpdateFrequency)
	if err != nil {
		panic("unable to parse update frequency: " + randomConfig.UpdateFrequency)
	}

	return &randomConnector{
		valueId:         sourceConfig.Id,
		config:          randomConfig,
		updateFrequency: updateFrequency,
	}
}

func (r *randomConnector) GetUpdate() (DataSourceUpdateMap, error) {
	randValue := r.config.MinValue + rand.Float64()*(r.config.MaxValue-r.config.MinValue)

	updateMap := DataSourceUpdateMap{
		r.valueId: DataSourceValueUpdate{
			ValueId:      r.valueId,
			DataSourceId: r.GetDataSourceId(),
			Timestamp:    time.Now(),
			Value:        randValue,
		},
	}

	return updateMap, nil
}

func (r *randomConnector) GetUpdateFrequency() time.Duration {
	return r.updateFrequency
}

func (r *randomConnector) GetDataSourceId() DataSourceId {
	return RandomDataSourceId
}

func getRandomDataSource(sourceConfigs []DataProviderSourceConfig) []dataSource {
	dataSources := make([]dataSource, 0)
	for _, sourceConfig := range sourceConfigs {
		connector := newRandomConnector(sourceConfig)
		dataSource := newScheduledDataSource(connector)
		dataSources = append(dataSources, dataSource)
	}
	return dataSources
}
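The scheduled data source concept mentioned in the review comment amounts to a connector that exposes GetUpdate and GetUpdateFrequency and is polled on a ticker. A self-contained sketch under those assumptions (newScheduledDataSource's real internals may differ, and the types here are simplified stand-ins):

```go
package main

import (
	"fmt"
	"time"
)

type update struct{ value float64 }

// pollingConnector mirrors the methods randomConnector exposes: produce an
// update on demand, and declare how often it should be polled.
type pollingConnector interface {
	GetUpdate() (update, error)
	GetUpdateFrequency() time.Duration
}

// countingConnector is a test connector whose updates are 1, 2, 3, ...
type countingConnector struct{ n int }

func (c *countingConnector) GetUpdate() (update, error) {
	c.n++
	return update{value: float64(c.n)}, nil
}

func (c *countingConnector) GetUpdateFrequency() time.Duration {
	return 5 * time.Millisecond
}

// runScheduled polls the connector at its declared frequency and pushes each
// update onto updatesCh until stop is closed.
func runScheduled(c pollingConnector, updatesCh chan update, stop chan struct{}) {
	ticker := time.NewTicker(c.GetUpdateFrequency())
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			u, err := c.GetUpdate()
			if err != nil {
				continue // a real implementation would log and keep polling
			}
			updatesCh <- u
		}
	}
}

func main() {
	updatesCh := make(chan update, 16)
	stop := make(chan struct{})
	go runScheduled(&countingConnector{}, updatesCh, stop)

	first := <-updatesCh
	close(stop)
	fmt.Println(first.value)
}
```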