Merge pull request #1769 from PrincetonUniversity/devel
Devel
dillontsmith authored Oct 6, 2020
2 parents 40726d5 + 91f8f8a commit e415857
Showing 36 changed files with 522 additions and 407 deletions.
10 changes: 10 additions & 0 deletions .github/dependabot.yml
@@ -16,3 +16,13 @@ updates:
include: "scope"
labels:
- "CI"

- package-ecosystem: "pip"
directory: "/" # use top dir
schedule:
interval: "daily"
target-branch: "devel"
commit-message:
prefix: "requirements"
labels:
- "deps"
6 changes: 3 additions & 3 deletions .github/workflows/pnl-ci.yml
@@ -47,7 +47,7 @@ jobs:
restore-keys: ${{ runner.os }}-python-${{ matrix.python-version }}-${{ matrix.python-architecture }}-pip-wheels

- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2.1.2
uses: actions/setup-python@v2.1.3
with:
python-version: ${{ matrix.python-version }}
architecture: ${{ matrix.python-architecture }}
@@ -102,7 +102,7 @@ jobs:
run: pytest --junit-xml=tests_out.xml --verbosity=0 -n auto --maxprocesses=2

- name: Upload test results
uses: actions/upload-artifact@v2.1.4
uses: actions/upload-artifact@v2.2.0
with:
name: test-results-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}
path: tests_out.xml
@@ -114,7 +114,7 @@ jobs:
python setup.py sdist bdist_wheel
if: contains(github.ref, 'tags')
- name: Upload dist packages
uses: actions/upload-artifact@v2.1.4
uses: actions/upload-artifact@v2.2.0
with:
name: dist-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}
path: dist/
26 changes: 12 additions & 14 deletions dev_requirements.txt
@@ -1,14 +1,12 @@
ipykernel
ipython
jupyter
psyneulink-sphinx-theme
pytest
pytest-benchmark
pytest-cov
pytest-helpers-namespace
pytest-profiling
pytest-pycodestyle
pytest-pydocstyle
pytest-xdist
sphinx
sphinx_autodoc_typehints
jupyter<=1.0.0
psyneulink-sphinx-theme<=1.2.1.7
pytest<6.1.1
pytest-benchmark<=3.2.3
pytest-cov<=2.10.1
pytest-helpers-namespace<=2019.1.8
pytest-profiling<=1.7.0
pytest-pycodestyle<=2.2.0
pytest-pydocstyle<=2.2.0
pytest-xdist<=2.1.0
sphinx<=3.2.1
sphinx_autodoc_typehints<=1.11.0
24 changes: 16 additions & 8 deletions docs/source/BasicsAndPrimer.rst
@@ -619,7 +619,7 @@ Example use-cases for dot notation
>>> comp1.run(inputs={m:1})
>>> # returns: [array([1.])]
>>> # set slope of m1's function to 2 for the most recent context (which is now comp1)
>>> m.function.slope = 2
>>> m.function.slope.base = 2
>>> comp1.run(inputs={m:1})
>>> # returns: [array([2.])]
>>> # note that changing the slope of m's function produced a different result
@@ -640,7 +640,7 @@ Example use-cases for dot notation
>>> m.execute([1])
>>> # returns: [array([1.])]
>>> # executing m outside of a Composition uses its default context
>>> m.function.slope = 2
>>> m.function.slope.base = 2
>>> m.execute([1])
>>> # returns: [array([2.])]
>>> comp1.run(inputs={m:1})
@@ -711,7 +711,7 @@ parameters. For example, the ``output`` Mechanism was assigned the `Logistic` `
specification. For example, the current value of the `gain <Logistic.gain>` parameter of the ``output``\'s Logistic
Function can be accessed in either of the following ways::

>>> output.function.gain
>>> output.function.gain.base
1.0
>>> output.function.parameters.gain.get()
1.0
@@ -731,12 +731,20 @@ complete description of modulation. The current *modulated* value of a paramete
<ParameterPort.value>` of the corresponding ParameterPort. For instance, the print statement in the example above
used ``task.parameter_ports[GAIN].value`` to report the modulated value of the `gain <Logistic.gain>` parameter of
the ``task`` Mechanism's `Logistic` function when the simulation was run. For convenience, it is also possible to
access the value of a modulable parameter by adding the prefix ``mod_`` to the name of the parameter; this returns
the `value <ParameterPort.value>` of the ParameterPort for the parameter::
access the value of a modulable parameter via dot notation. Dot notation for modulable parameters differs slightly
from that for non-modulable parameters, in order to provide easy access to both the base and modulated values::

>>> task.function.gain
(Logistic Logistic Function-5):
gain.base: 1.0
gain.modulated: [0.55]

Instead of just returning a value, the dot notation returns an object with ``base`` and ``modulated`` attributes.
``modulated`` refers to the `value <ParameterPort.value>` of the ParameterPort for the parameter::

>>> task.parameter_ports[GAIN].value
[0.62]
>>> task.mod_gain
>>> task.gain.modulated
[0.62]

This works for any modulable parameters of the Mechanism, its
@@ -745,8 +753,8 @@ here, neither the ``parameters`` nor the ``function`` attributes of the Mechanis
Note also that, as explained above, the value returned is different from the base value of the function's gain
parameter::

>>> task.function.gain
[1.0]
>>> task.function.gain.base
1.0

This is because when the Composition was run, the ``control`` Mechanism modulated the value of the gain parameter.

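A minimal sketch of the new access pattern described above (assuming a simple, unmodulated
TransferMechanism, so the modulated value simply tracks the gain ParameterPort's value)::

    import psyneulink as pnl

    # Mechanism whose Logistic function has a modulable `gain` parameter
    task = pnl.TransferMechanism(function=pnl.Logistic(gain=1.0))
    comp = pnl.Composition()
    comp.add_node(task)
    comp.run(inputs={task: [1.0]})

    print(task.function.gain.base)       # value assigned at construction: 1.0
    print(task.function.gain.modulated)  # value held by the gain ParameterPort
    task.function.gain.base = 2.0        # the supported way to change the base value
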
80 changes: 76 additions & 4 deletions psyneulink/core/components/component.py
@@ -626,12 +626,29 @@ def __init__(self, message, component=None):
super().__init__(message)


def make_parameter_property(name):
def _get_parametervalue_attr(param):
return f'_{param.name}'


def make_parameter_property(param):
def getter(self):
return getattr(self.parameters, name)._get(self.most_recent_context)
p = getattr(self.parameters, param.name)

if p.modulable:
return getattr(self, _get_parametervalue_attr(p))
else:
return p._get(self.most_recent_context)

def setter(self, value):
getattr(self.parameters, name)._set(value, self.most_recent_context)
p = getattr(self.parameters, param.name)
if p.modulable:
warnings.warn(
'Setting parameter values directly using dot notation'
' may be removed in a future release. It is replaced with,'
f' for example, <object>.{param.name}.base = {value}',
FutureWarning,
)
getattr(self.parameters, p.name)._set(value, self.most_recent_context)

return property(getter).setter(setter)

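The practical effect of the setter above: assigning directly to a modulable parameter still applies the
value, but now emits a FutureWarning pointing at the ``.base`` idiom. A minimal sketch, assuming a
ProcessingMechanism with a Linear function (names are illustrative)::

    import warnings

    import psyneulink as pnl

    m = pnl.ProcessingMechanism(function=pnl.Linear(slope=1.0))

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        m.function.slope = 2  # deprecated spelling; the value is still set
    assert any(issubclass(w.category, FutureWarning) for w in caught)

    m.function.slope.base = 2  # preferred replacement; no warning
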
@@ -666,7 +683,7 @@ def __init__(self, *args, **kwargs):

for param in self.parameters:
if not hasattr(self, param.name):
setattr(self, param.name, make_parameter_property(param.name))
setattr(self, param.name, make_parameter_property(param))

try:
if param.default_value.owner is None:
@@ -2030,6 +2047,10 @@ def _is_user_specified(parameter):
if isinstance(p.default_value, Function):
p.default_value.owner = p

for p in self.parameters:
if p.stateful:
setattr(self, _get_parametervalue_attr(p), ParameterValue(self, p))

def _instantiate_parameter_classes(self, context=None):
"""
An optional method that will take any Parameter values in
@@ -3620,6 +3641,12 @@ def make_property_mod(param_name, parameter_port_name=None):
parameter_port_name = param_name

def getter(self):
warnings.warn(
f'Getting modulated parameter values with <object>.mod_<param_name>'
' may be removed in a future release. It is replaced with,'
f' for example, <object>.{param_name}.modulated',
FutureWarning
)
try:
return self._parameter_ports[parameter_port_name].value
except TypeError:
@@ -3647,3 +3674,48 @@ def getter(self, context=None):
.format(self.name, param_name))

return getter


class ParameterValue:
def __init__(self, owner, parameter):
self._owner = owner
self._parameter = parameter

def __repr__(self):
return f'{self._owner}:\n\t{self._parameter.name}.base: {self.base}\n\t{self._parameter.name}.modulated: {self.modulated}'

@property
def modulated(self):
try:
is_modulated = (self._parameter in self._owner.parameter_ports)
except AttributeError:
is_modulated = False

try:
is_modulated = is_modulated or (self._parameter in self._owner.owner.parameter_ports)
except AttributeError:
pass

if is_modulated:
return self._owner._get_current_parameter_value(
self._parameter,
self._owner.most_recent_context
)
else:
warnings.warn(f'{self._parameter.name} is not currently modulated.')
return None

@modulated.setter
def modulated(self, value):
raise ComponentError(
f"Cannot set {self._owner.name}'s modulated {self._parameter.name}"
' value directly because it is computed by the ParameterPort.'
)

@property
def base(self):
return self._parameter._get(self._owner.most_recent_context)

@base.setter
def base(self, value):
self._parameter._set(value, self._owner.most_recent_context)
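
Taken together with the deprecated ``mod_`` prefix above, a sketch of how ParameterValue behaves
(assuming a TransferMechanism with a Logistic function)::

    import psyneulink as pnl

    task = pnl.TransferMechanism(function=pnl.Logistic(gain=1.0))
    task.execute([0.5])

    pv = task.function.gain   # a ParameterValue, not a bare number
    print(pv)                 # repr shows both gain.base and gain.modulated
    pv.base = 2.0             # writes through to the underlying Parameter
    print(pv.modulated)       # ParameterPort value; warns and returns None
                              # if the parameter is not currently modulated
    try:
        pv.modulated = 3.0    # read-only: computed by the ParameterPort
    except Exception as err:  # raises ComponentError, per the setter above
        print(err)
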
2 changes: 1 addition & 1 deletion psyneulink/core/components/functions/objectivefunctions.py
@@ -1000,7 +1000,7 @@ def _gen_llvm_function_body(self, ctx, builder, params, _, arg_in, arg_out, *, t
elif self.metric == MAX_ABS_DIFF:
del kwargs['acc']
max_diff_ptr = builder.alloca(ctx.float_ty)
builder.store(ctx.float_ty("NaN"), max_diff_ptr)
builder.store(ctx.float_ty(float("NaN")), max_diff_ptr)
kwargs['max_diff_ptr'] = max_diff_ptr
inner = functools.partial(self.__gen_llvm_max_diff, **kwargs)
elif self.metric == CORRELATION:
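
This fix, like the analogous ones below in optimizationfunctions.py (``float("NaN")``) and
transferfunctions.py (``float('-inf')``), passes a real Python float to the llvmlite constant
constructor instead of a string, so llvmlite can format the IR literal itself. A standalone sketch,
assuming llvmlite (``DoubleType`` chosen for illustration)::

    import llvmlite.ir as ir

    double = ir.DoubleType()
    nan_const = ir.Constant(double, float("nan"))      # formatted as a proper IR literal
    neg_inf_const = ir.Constant(double, float("-inf"))
    print(nan_const, neg_inf_const)
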
8 changes: 3 additions & 5 deletions psyneulink/core/components/functions/optimizationfunctions.py
@@ -1516,7 +1516,7 @@ def _gen_llvm_function_body(self, ctx, builder, params, state, arg_in, arg_out,
# Use NaN here. fcmp_unordered below returns true if one of the
# operands is a NaN. This makes sure we always set min_*
# in the first iteration
builder.store(min_value_ptr.type.pointee("NaN"), min_value_ptr)
builder.store(min_value_ptr.type.pointee(float("NaN")), min_value_ptr)

b = builder
with contextlib.ExitStack() as stack:
@@ -1565,14 +1565,13 @@ def _gen_llvm_function_body(self, ctx, builder, params, state, arg_in, arg_out,
def _run_cuda_grid(self, ocm, variable, context):
assert ocm is ocm.agent_rep.controller
# Compiled evaluate expects the same variable as mech function
new_variable = [np.asfarray(ip.parameters.value.get(context))
for ip in ocm.input_ports]
new_variable = np.array(new_variable, dtype=np.object)
new_variable = [ip.parameters.value.get(context) for ip in ocm.input_ports]
# Map allocations to values
comp_exec = pnlvm.execution.CompExecution(ocm.agent_rep, [context.execution_id])
ct_alloc, ct_values = comp_exec.cuda_evaluate(new_variable,
self.search_space)

assert len(ct_values) == len(ct_alloc)
# Reduce array of values to min/max
# select_min params are:
# params, state, min_sample_ptr, sample_ptr, min_value_ptr, value_ptr, opt_count_ptr, count
@@ -1582,7 +1581,6 @@ def _run_cuda_grid(self, ocm, variable, context):
ct_opt_sample = bin_func.byref_arg_types[2](float("NaN"))
ct_opt_value = bin_func.byref_arg_types[4]()
ct_opt_count = bin_func.byref_arg_types[6](0)
assert len(ct_values) == len(ct_alloc)
ct_count = bin_func.c_func.argtypes[7](len(ct_alloc))

bin_func(ct_param, ct_state, ct_opt_sample, ct_alloc, ct_opt_value,
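
The comment in the first hunk explains the sentinel trick: seeding the running minimum with NaN makes
the first fcmp_unordered comparison succeed, so ``min_*`` is set on the first iteration without a
special case. A pure-Python sketch of the same logic (illustrative only; the real code emits LLVM IR)::

    def argmin_nan_sentinel(values):
        min_value = float("nan")  # NaN sentinel: no ordered comparison succeeds
        min_index = -1
        for i, v in enumerate(values):
            # mirrors fcmp_unordered('<', v, min_value): true if v < min_value
            # or if either operand is NaN -- hence always true on iteration 0
            if not (v >= min_value):
                min_value, min_index = v, i
        return min_index, min_value

    print(argmin_nan_sentinel([3.0, 1.5, 2.0]))  # (1, 1.5)
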
2 changes: 1 addition & 1 deletion psyneulink/core/components/functions/transferfunctions.py
@@ -2357,7 +2357,7 @@ def __gen_llvm_apply(self, ctx, builder, params, _, arg_in, arg_out):
builder.store(exp_sum_ptr.type.pointee(0), exp_sum_ptr)

max_ptr = builder.alloca(ctx.float_ty)
builder.store(max_ptr.type.pointee('-inf'), max_ptr)
builder.store(max_ptr.type.pointee(float('-inf')), max_ptr)

max_ind_ptr = builder.alloca(ctx.int32_ty)
builder.store(max_ind_ptr.type.pointee(-1), max_ind_ptr)
19 changes: 13 additions & 6 deletions psyneulink/core/components/mechanisms/mechanism.py
@@ -921,15 +921,15 @@ class `UserList <https://docs.python.org/3.6/library/collections.html?highlight=
of a parameter (as the key) and the value to assign to it, as in the following example::
>>> T = pnl.TransferMechanism(function=Linear)
>>> T.function.slope #doctest: +SKIP
>>> T.function.slope.base #doctest: +SKIP
1.0 # Default for slope
>>> T.clip #doctest: +SKIP
None # Default for clip is None
>>> T.execute(2.0,
... runtime_params={"slope": 3.0,
... "clip": (0,5)}) #doctest: +SKIP
array([[5.]]) # = 2 (input) * 3 (slope) = 6, but clipped at 5
>>> T.function.slope #doctest: +SKIP
>>> T.function.slope.base #doctest: +SKIP
1.0 # slope is restored to 1.0
>>> T.clip #doctest: +SKIP
None # clip is restored to None
@@ -942,7 +942,7 @@ class `UserList <https://docs.python.org/3.6/library/collections.html?highlight=
If a parameter is assigned a new value before the execution, that value is restored after the execution; that is,
the parameter is assigned its previous value and *not* its default, as shown below::
>>> T.function.slope = 10
>>> T.function.slope.base = 10
>>> T.clip = (0,3)
>>> T.function.slope.base
10
@@ -952,7 +952,7 @@ class `UserList <https://docs.python.org/3.6/library/collections.html?highlight=
... runtime_params={"slope": 4.0,
... "clip": (0,4)}) #doctest: +SKIP
array([[4.]]) # = 3 (input) * 4 (slope) = 12, but clipped at 4
>>> T.function.slope #doctest: +SKIP
>>> T.function.slope.base #doctest: +SKIP
10 # slope is restored to 10.0, its previously assigned value
>>> T.clip #doctest: +SKIP
(0, 3) # clip is restored to (0,3), its previously assigned value
@@ -2783,10 +2783,17 @@ def _get_output_struct_type(self, ctx):
def _get_input_struct_type(self, ctx):
# Extract the non-modulation portion of InputPort input struct
input_type_list = [ctx.get_input_struct_type(port).elements[0] for port in self.input_ports]


# Get modulatory inputs
mod_input_type_list = [ctx.get_output_struct_type(proj) for proj in self.mod_afferents]
if len(mod_input_type_list) > 0:
if len(self.mod_afferents) > 0:
mod_input_type_list = (ctx.get_output_struct_type(proj) for proj in self.mod_afferents)
input_type_list.append(pnlvm.ir.LiteralStructType(mod_input_type_list))
# Prefer an array type if there is no modulation.
# This is used to keep ctypes inputs as arrays instead of structs.
elif all(t == input_type_list[0] for t in input_type_list):
return pnlvm.ir.ArrayType(input_type_list[0], len(input_type_list))

return pnlvm.ir.LiteralStructType(input_type_list)

def _get_param_initializer(self, context):
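
The new branch prefers a flat array over a one-field-per-port struct when there is no modulation and
all input ports share a type, keeping the host-side ctypes representation a plain array. A sketch of
the two layouts in llvmlite (element type and sizes are illustrative)::

    import llvmlite.ir as ir

    elem = ir.ArrayType(ir.DoubleType(), 3)  # one input port's value

    # Heterogeneous (or modulated) inputs need a struct:
    as_struct = ir.LiteralStructType([elem, elem])

    # Homogeneous, unmodulated inputs can instead be a flat array,
    # which maps onto a simple ctypes array on the host side:
    as_array = ir.ArrayType(elem, 2)

    print(as_struct)  # {[3 x double], [3 x double]}
    print(as_array)   # [2 x [3 x double]]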