* The package now uses `rcpptimer`. This simplifies the code and makes use of the API of `rcpptimer` 1.2.0, which is expected to be stable.
* … `online()` objects to deploy online learning algorithms in production.
* The `conline` C++ class now exposes weights to R.
* A `conline` C++ class was added.
* New functions are exposed that are used with the `conline` C++ class. These functions are: `init_experts_list()`, `make_basis_mats` and `make_hat_mats`.
* `online()` was simplified a bit by utilizing the new `init_experts_list()` function.
* `post_process_model()` was improved and is now exposed to be used in conjunction with the `conline` C++ class.
* `online()` outputs now include `predictions_got_sorted`, a matrix which indicates whether quantile crossing occurred and predictions have been sorted.
* `tidy()` methods were added to convert the `weights`, `predictions` and loss objects of `online()` output to a tibble (for further analysis, plotting, etc.).
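A minimal sketch of the new methods; the simulated data and experts are made up, and it is assumed that `tidy()` (re-exported or available via the generics package) dispatches on the `weights` and `predictions` components of the fit:

```r
library(profoc)

set.seed(1)
T <- 100
tau <- c(0.25, 0.5, 0.75)               # P = 3 probabilities
y <- matrix(rnorm(T))                    # T x 1 observations
experts <- array(NA, dim = c(T, 3, 2))   # T x P x K expert quantiles
experts[, , 1] <- matrix(qnorm(tau, mean = -1), T, 3, byrow = TRUE)
experts[, , 2] <- matrix(qnorm(tau, mean = 1), T, 3, byrow = TRUE)

fit <- online(y = y, experts = experts, tau = tau)

# Long-format tibbles, convenient for dplyr/ggplot2 workflows
tidy(fit$weights)
tidy(fit$predictions)
```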
* Added an `autoplot()` method. In consequence, ggplot2 became a new dependency of this package.
* … `online()`.
* profoc now depends on R >= 4.3.0 to ensure C++17 support.
* … `ncp < 0`.
* Added a `penalty()` function which works with equidistant and non-equidistant knots.
* `online()` saves memory by not reporting `past_performance` and `past_predictions_grid`. However, the cumulative performance and the most recent predictions w.r.t. the parameter grid are always included in the output. The former is used internally for choosing the best hyperparameter set, and the latter for updating the weights. Depending on the data and the parameter space considered, both objects may get large. You can still opt in to include them in the output by setting `save_past_performance = TRUE` and `save_past_predictions_grid = TRUE` in `online()`.
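A hedged sketch of the opt-in; the data setup is illustrative:

```r
library(profoc)

set.seed(1)
y <- matrix(rnorm(100))
tau <- c(0.25, 0.5, 0.75)
experts <- array(rnorm(100 * 3 * 2), dim = c(100, 3, 2))  # T x P x K

fit <- online(
  y = y, experts = experts, tau = tau,
  save_past_performance = TRUE,       # keep per-period performance per grid row
  save_past_predictions_grid = TRUE   # keep past predictions for every grid row
)

# Both objects are now part of the output (and can be large):
str(fit$past_performance, max.level = 1)
str(fit$past_predictions_grid, max.level = 1)
```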
* … `online()` to reduce memory usage.
* `online()` is able to sample from grids of up to 2^64-1 rows.
* `sample_int()` works similarly to `sample.int()` and also respects seeds set by `set.seed()`.
* `parametergrids` lets you provide custom grids of parameters in `online()`.
* … `online()` when using large grids of parameters.
* Fixed a bug where `forget_past_performance` had no effect in `online()`.
* `online` can now be used with multivariate data `y` and a TxDxPxK array as `experts`.
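A hedged sketch of the multivariate case with the dimensions described above; the data and experts are made up:

```r
library(profoc)

set.seed(1)
T <- 50; D <- 2; P <- 3; K <- 2
tau <- 1:P / (P + 1)

y <- matrix(rnorm(T * D), nrow = T, ncol = D)       # T x D observations
experts <- array(NA, dim = c(T, D, P, K))            # T x D x P x K
for (k in 1:K) {
  for (p in 1:P) {
    experts[, , p, k] <- y + qnorm(tau[p], sd = k)   # crude quantile experts
  }
}

fit <- online(y = y, experts = experts, tau = tau)
dim(fit$predictions)
```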
* `summary.online` can be used to obtain selected parameters of `online` models.
* `online` uses Rcpp Modules to bundle data and functionality into an exposed C++ class.
* The `initial_weights` argument is replaced by `init`.
* `init` takes a named list; currently, the initial weights (`initial_weights`) and the initial cumulative regret (`R0`) can be provided. They have to be PxK or 1xK.
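A sketch of supplying starting weights through `init`, assuming the element names given above (`initial_weights`, `R0`) and a 1xK matrix; the data setup is illustrative:

```r
library(profoc)

set.seed(1)
y <- matrix(rnorm(100))
tau <- c(0.25, 0.5, 0.75)
experts <- array(rnorm(100 * 3 * 2), dim = c(100, 3, 2))  # K = 2 experts

w0 <- matrix(c(0.8, 0.2), nrow = 1)   # 1 x K starting weights

fit <- online(
  y = y, experts = experts, tau = tau,
  init = list(initial_weights = w0)   # element name as described above
)
```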
* The `profoc` function was extended: `regret` can now be passed as an array as before, or as a list, e.g. `list(regret = regret_array, share = 0.2)`, if the provided regret should be mixed with the regret calculated by `online`. `loss` can also be provided as a list, see above.
* The `batch` function can now minimize an alternative objective function, the quantile-weighted CRPS (`qw_crps = TRUE`).
* The `profoc` function was renamed to `online` for consistency.
* New `batch` function to apply batch learning.
* New `oracle` function to approximate the oracle.
* … `online` and `batch` objects.
* The spline functions were rewritten to add the ability to use a non-equidistant knot sequence and a penalty term defined on the Sobolev space. This change induces breaking changes to small parts of the API.
* `ndiff` defines the degree of differencing for creating the penalty term. For values between 1 and 2, a weighted sum of the difference penalization matrices is used.
* `rel_nseg` is replaced by `knot_distance` (distance between knots). Defaults to 0.025, which corresponds to the grid steps when `knot_distance_power = 1` (the default).
* `knot_distance_power` defines whether knots are uniformly distributed. Defaults to 1, which corresponds to the equidistant case. Values less than 1 create more knots in the center, while values above 1 concentrate more knots in the tails.
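The following is an illustration of the qualitative effect only, not the package’s actual knot construction: pushing an equidistant grid through a power transform concentrates points in the center for powers below 1 and in the tails for powers above 1.

```r
# Illustration only: how a power transform reshapes an equidistant grid
place_knots <- function(n, power = 1) {
  u <- seq(0, 1, length.out = n)   # equidistant grid on [0, 1]
  c0 <- 2 * u - 1                  # recenter to [-1, 1]
  0.5 + 0.5 * sign(c0) * abs(c0)^(1 / power)
}
round(place_knots(9, power = 1), 3)    # equidistant
round(place_knots(9, power = 0.5), 3)  # denser around the center (0.5)
round(place_knots(9, power = 2), 3)    # denser in the tails (near 0 and 1)
```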
* `allow_quantile_crossing` defines whether quantile crossing is allowed. Defaults to `FALSE`, which means that predictions will be sorted.
* … `package:::function` notation.
* `y` must now be a matrix of either Tx1 or TxP.
* `trace` specifies whether a progress bar will be printed or not. Defaults to `TRUE`.
* `loss_function` now lets you specify “quantile”, “expectile” or “percentage”. All functions are generalized as in Gneiting 2009. The power can be scaled by `loss_parameter`. The latter defaults to 1, which leads to the well-known quantile, squared, and absolute percentage loss.
* `gradient` lets you specify whether the learning algorithm should consider the actual loss or a linearized version using the gradient of the loss. Defaults to `TRUE` (gradient-based learning).
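For reference, a sketch of the default quantile (pinball) loss and its (sub)gradient with respect to the prediction, i.e. the `loss_parameter = 1` quantile case only; the generalized expectile and percentage variants are not reproduced here:

```r
# Pinball (quantile) loss for a prediction x, observation y and probability tau
pinball <- function(x, y, tau) {
  ifelse(y < x, (1 - tau) * (x - y), tau * (y - x))
}

# (Sub)gradient of the pinball loss with respect to the prediction x,
# which a gradient-based (linearized) update uses instead of the loss itself
pinball_grad <- function(x, y, tau) {
  ifelse(y < x, 1 - tau, -tau)
}

pinball(x = 0.2, y = 0.5, tau = 0.9)       # 0.27: under-prediction is costly at tau = 0.9
pinball_grad(x = 0.2, y = 0.5, tau = 0.9)  # -0.9: pushes the prediction upwards
```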
* `forget_performance` was added. It defines the share of the past performance that will be ignored when selecting the best parameter combination.
* Renamed the `forget` parameter to `forget_regret` to underline its reference to the regret.
* New `init_weights` parameter. It has to be either a Kx1 or KxP matrix specifying the experts’ starting weights.
* New `lead_time` parameter: offset for expert forecasts. Defaults to 0, which means that experts predict t+1 at t. Setting this to h means experts’ predictions refer to t+1+h at time t. The weight updates are delayed accordingly.
* `tau` is now optional. It defaults to `1:P/(P+1)`. A scalar given to `tau` will be repeated P times. The latter is useful in multivariate settings.
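For example, with P = 9 the default grid is:

```r
P <- 9
tau <- 1:P / (P + 1)
tau
#> [1] 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
```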
* The `pinball_loss` and `loss_pred` functions were replaced by a more flexible function called `loss`.
* The `weights` object is changed from a (T+1 x K x P) array to a (T+1 x P x K) array to match other objects’ dimensions. Now the following indexing scheme is consistent throughout the package: (Time, Probabilities, Experts, Parameter combination).