
Open data & software

  • LCSL

    CRiSP for Misspecified Robot Model

    Gian Maria Marconi, Raffaello Camoriano, Lorenzo Rosasco, and Carlo Ciliberto

    Code accompanying the paper “Structured Prediction for CRiSP Inverse Kinematics Learning with Misspecified Robot Models”

  • LCSL

    Iterreg: Iterative Regularization For Linear Models

    C. Molinari, M. Massias, L. Rosasco and S. Villa

    Iterreg is a scikit-learn-compatible Python package for iterative regularization of linear models. It implements the algorithm of “Iterative regularization for convex regularizers” by C. Molinari, M. Massias, L. Rosasco and S. Villa, AISTATS 2021.
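
    The snippet below is only a toy illustration of the underlying idea, not the iterreg API: plain gradient (Landweber) iterations on least squares, with the iteration count acting as the regularization parameter and chosen by early stopping on a validation set. The paper covers general convex regularizers; this sketch shows only the classical squared-norm case.

        # Toy sketch of iterative regularization (NOT the iterreg API):
        # the number of gradient iterations plays the role of the
        # regularization parameter, selected by early stopping.
        import numpy as np

        rng = np.random.default_rng(0)
        X, w_true = rng.standard_normal((200, 50)), rng.standard_normal(50)
        y = X @ w_true + 0.5 * rng.standard_normal(200)
        X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

        step = 1.0 / np.linalg.norm(X_tr, 2) ** 2   # safe step size (1 / sigma_max^2)
        w = np.zeros(50)
        best_w, best_err = w, np.inf
        for t in range(1000):
            w = w - step * X_tr.T @ (X_tr @ w - y_tr)   # Landweber / gradient step
            err = np.mean((X_val @ w - y_val) ** 2)
            if err < best_err:                           # early stopping = regularization
                best_w, best_err = w.copy(), err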

  • LCSL

    Falkon 2.0: Scaling Nyström KRR to Billions of Points

    Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi

    Python package implementing the Falkon algorithm for large-scale, approximate kernel ridge regression. The implementation is based on PyTorch and runs on CPU and GPU.
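
    A minimal usage sketch follows, assuming the estimator-style interface described in the Falkon documentation (falkon.Falkon with a Gaussian kernel and M Nyström centres); check the repository for the exact signatures and options.

        import torch
        from falkon import Falkon, kernels

        # Synthetic regression data as torch tensors (Falkon works on torch tensors).
        X = torch.randn(10_000, 10)
        y = X @ torch.randn(10, 1) + 0.1 * torch.randn(10_000, 1)

        # Gaussian kernel, ridge penalty, and number of Nystrom centres M.
        kernel = kernels.GaussianKernel(sigma=2.0)
        model = Falkon(kernel=kernel, penalty=1e-6, M=500)

        model.fit(X, y)           # runs the preconditioned iterative solver
        preds = model.predict(X)  # in-sample predictions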

  • LCSL

    Batch-BKB: Near-linear time, no regret Bayesian optimization

    Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko and Lorenzo Rosasco

    Python code implementing Batch-BKB: the first Bayesian optimization (a.k.a. Gaussian process or bandit optimization) algorithm that is both provably no-regret and guaranteed to run in near-linear time.
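
    The toy loop below only illustrates the Gaussian-process / bandit optimization setting that Batch-BKB addresses, using a standard GP-UCB acquisition rule with an exact posterior. Batch-BKB itself replaces the exact posterior with a Nyström sketch and selects whole batches of points, which is what yields the near-linear running time; none of that is shown here.

        # Toy GP-UCB loop on a 1-D grid (illustration of the setting, not BBKB).
        import numpy as np

        rng = np.random.default_rng(0)
        grid = np.linspace(0, 1, 200)
        f = np.sin(6 * grid) * np.exp(-grid)            # unknown objective (toy)

        def kern(a, b):
            # Gaussian kernel with lengthscale 0.05 and unit prior variance.
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.05 ** 2)

        X, y = [], []
        for t in range(30):
            if not X:
                i = rng.integers(len(grid))              # first point at random
            else:
                Xa = np.array(X)
                K = kern(Xa, Xa) + 1e-3 * np.eye(len(Xa))
                k_star = kern(grid, Xa)
                mu = k_star @ np.linalg.solve(K, np.array(y))
                var = 1.0 - np.sum(k_star * np.linalg.solve(K, k_star.T).T, axis=1)
                i = int(np.argmax(mu + 2.0 * np.sqrt(np.clip(var, 0, None))))  # UCB rule
            X.append(grid[i])
            y.append(f[i] + 0.01 * rng.standard_normal())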

  • LCSL

    DPP-VFX: Very Fast and eXact DPP sampler

    Michał Dereziński, Daniele Calandriello, and Michal Valko

    Material related to the NeurIPS 2019 paper “Exact sampling of determinantal point processes with sublinear time preprocessing” by Michał Dereziński, Daniele Calandriello, and Michal Valko.

  • LCSL

    BLESS: Bottom-up leverage score sampling

    Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco

    Python code implementing the ridge leverage score sampling algorithm BLESS presented in: On Fast Leverage Score Sampling and Optimal Learning (NIPS 2018). The implementation can exploit both GPU and CPU resources.
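
    For reference, the snippet below computes the exact quantity that BLESS approximates, the ridge leverage scores l_i(λ) = (K (K + λnI)^{-1})_{ii}, on a small problem and samples columns proportionally to them. BLESS never forms the full n × n kernel matrix; it estimates these scores bottom-up over a growing dictionary, which is what makes it scale.

        # Exact ridge leverage scores on a toy problem (not the BLESS algorithm).
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 5))
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # Gaussian kernel

        lam, n = 1e-2, X.shape[0]
        scores = np.diag(np.linalg.solve(K + lam * n * np.eye(n), K))  # (K (K+lam*n*I)^-1)_ii
        probs = scores / scores.sum()
        centers = rng.choice(n, size=50, replace=False, p=probs)       # sample by leverage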

  • LCSL

    FALKON: An Optimal Large Scale Kernel Method

    Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco

    This Matlab code provides an implementation of the FALKON algorithm presented in ‘FALKON: An Optimal Large Scale Kernel Method’ (NIPS 2017), together with scripts to reproduce the experiments in the paper.

  • LCSL

    iCubWorld Dataset for Visual Recognition in Robotics

    Giulia Pasquale, Carlo Ciliberto, Sean R. Fanello, Lorenzo Natale, Francesca Odone et al.

    The iCub World Dataset is an ongoing project in collaboration with the iCub Facility and the SLIP-GURU group at the University of Genoa. The goal of the project is to build a growing dataset for visual recognition in robotics. The dataset was acquired during Human-Robot Interaction sessions, where a human teacher showed different everyday objects to the iCub robot.

  • LCSL

    MultiplePassesSGM: Multiple passes stochastic gradient method

    Junhong Lin, Raffaello Camoriano and Lorenzo Rosasco

    This is the experimental code used in ‘Generalization Properties and Implicit Regularization for Multiple Passes SGM’, appearing in ICML 2016 (http://jmlr.org/proceedings/papers/v48/lina16.pdf).
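
    As a rough illustration of the idea studied in the paper (not the experimental code itself), the sketch below runs plain SGD over the data for several passes and treats the number of passes as the implicit regularization parameter, chosen on a validation set.

        # Toy sketch: number of SGD passes as the regularization knob.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((300, 40))
        y = X @ rng.standard_normal(40) + 0.5 * rng.standard_normal(300)
        X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

        w = np.zeros(40)
        val_err = []
        for p in range(50):                        # each p is one full pass over the data
            for i in rng.permutation(len(y_tr)):   # SGD step on the squared loss
                w -= 0.01 * (X_tr[i] @ w - y_tr[i]) * X_tr[i]
            val_err.append(np.mean((X_val @ w - y_val) ** 2))
        best_passes = int(np.argmin(val_err)) + 1  # validation picks the stopping pass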

  • LCSL

    NystromCoRe: Nyström Computational Regularization

    Raffaello Camoriano, Alessandro Rudi and Lorenzo Rosasco

    This Matlab package provides an implementation of the Nyström Computational Regularization algorithm presented in the following work: Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco, ‘Less is More: Nyström Computational Regularization’, 16 Jul 2015, http://arxiv.org/abs/1507.04717.
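
    The package itself is Matlab; the Python sketch below is only a conceptual illustration of Nyström kernel ridge regression as analysed in the paper: pick M random centres, solve the subsampled system (Kₙₘᵀ Kₙₘ + λ n Kₘₘ) α = Kₙₘᵀ y, and predict with the kernel between test points and centres. The subsampling level M acts as a computational regularization parameter.

        # Conceptual Nystrom KRR sketch (not the NystromCoRe package).
        import numpy as np

        def gauss(A, B, s=1.0):
            d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-0.5 * d / s**2)

        rng = np.random.default_rng(0)
        X = rng.standard_normal((2000, 10))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)

        M, lam, n = 100, 1e-6, len(X)
        centers = X[rng.choice(n, M, replace=False)]
        Knm, Kmm = gauss(X, centers), gauss(centers, centers)
        alpha = np.linalg.solve(Knm.T @ Knm + lam * n * Kmm, Knm.T @ y)
        y_hat = Knm @ alpha   # predictions on the training inputs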

  • LCSL

    NYTRO: NYström iTerative RegularizatiOn

    Tomas Angles*, Raffaello Camoriano*, Alessandro Rudi and Lorenzo Rosasco

    This Matlab package provides an implementation of the NYTRO algorithm presented in the following work: Tomas Angles, Raffaello Camoriano, Alessandro Rudi, Lorenzo Rosasco, ‘NYTRO: When Subsampling Meets Early Stopping’, 19 Oct 2015, http://arxiv.org/abs/1510. NYTRO combines early stopping and subsampling ideas into a form of randomized iterative regularization, overcoming the memory bottleneck of exact early stopping algorithms such as the kernelized Landweber iteration. Moreover, NYTRO can also be faster than other subsampled algorithms, such as Nyström Kernel Regularized Least Squares (NKRLS), especially when a stopping rule is used.

  • Research
    LCSL

    GURLS: Grand Unified Regularized Least Squares

    Andrea Tacchetti, Pavan K Mallapragada, Matteo Santoro and Lorenzo Rosasco

    GURLS is a modular, easy-to-extend software library for efficient supervised learning based on regularized least squares. It is targeted at machine learning practitioners as well as non-specialists, and offers a number of state-of-the-art training strategies for medium- and large-scale learning, together with routines for efficient model selection. The library is particularly well suited for multi-output problems (multi-category/multi-label).

  • LCSL

    Scikit-Tensor

    Maximilian Nickel

    Scikit-tensor is a Python module for multilinear algebra and tensor factorizations.
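
    A minimal usage sketch, assuming the dtensor / cp_als interface shown in the scikit-tensor README (exact signatures and return values may differ across versions): wrap a NumPy array as a dense tensor and compute a rank-3 CP decomposition via alternating least squares.

        import numpy as np
        from sktensor import dtensor, cp_als

        # Dense 3-way tensor from a numpy array, then a rank-3 CP decomposition.
        T = dtensor(np.random.rand(20, 30, 10))
        P, fit, itr, exectimes = cp_als(T, 3, init='random')
        print(P.U[0].shape, fit)   # first factor matrix (20 x 3) and final fit value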