The application of the method goes through these steps:
|
* A LASSO (L1-regularized) minimization is performed; a sketch of the minimized objective is given below, where ***P*** is a vector listing the property of interest (here, the RS - ZB difference in energy) for all data points (here, binary materials), ***D*** is a matrix whose columns are the *derived features* evaluated for each material, ***c*** is the (sparse) vector of coefficients found upon minimization, λ is the regularization parameter that determines the level of sparsity (number of non-zero elements) of ***c***, and the subscript 1 stands for the L1 (also known as Manhattan) norm, i.e., in contrast with the usual Euclidean (L2) norm, the sum of the absolute values of the elements of its vector argument.
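A sketch of the LASSO objective, consistent with the definitions above (the exact normalization conventions of the actual implementation may differ):

```math
\mathbf{c} = \underset{\mathbf{c}}{\operatorname{arg\,min}} \left( \lVert \mathbf{P} - \mathbf{D}\,\mathbf{c} \rVert_2^2 \;+\; \lambda \, \lVert \mathbf{c} \rVert_1 \right)
```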
|
|
The regularization parameter λ is decreased in small steps, starting from the largest value that gives one non-zero element in ***c***, and 50 distinct features that have a non-zero coefficient in ***c*** are collected.
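A minimal Python sketch of this feature-collection loop, using scikit-learn's `Lasso`. The array names `D` and `P`, the helper `collect_lasso_features`, the step schedule, and the absence of feature standardization are illustrative assumptions, not the toolkit's actual implementation:

```python
import numpy as np
from sklearn.linear_model import Lasso

def collect_lasso_features(D, P, n_features=50, n_steps=200):
    """Decrease the regularization parameter from (roughly) the value at which
    the first feature enters the model, and collect the indices of the first
    `n_features` distinct features that acquire a non-zero coefficient."""
    # For scikit-learn's Lasso parametrization, all coefficients are zero
    # whenever alpha >= max_j |D_j . P| / n_samples.
    lam_max = np.max(np.abs(D.T @ P)) / D.shape[0]
    selected = []
    for lam in np.geomspace(lam_max, lam_max * 1e-4, n_steps):
        coef = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(D, P).coef_
        for j in np.flatnonzero(coef):
            if j not in selected:
                selected.append(int(j))
        if len(selected) >= n_features:
            break
    return selected[:n_features]
```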
|
|
* An L0 optimization is performed, formally written as:
|
|
|
|
![L0formula](/uploads/a5b051f90312cb6d73136c956ad39442/L0formula.png)
|
|
where the subscript 0 stands for the L0 quasinorm, which counts the number of non-zero elements of its argument, ***D'*** is the matrix whose columns are the 50 columns selected from ***D*** in the previous step, and ***c'*** is the solution of this new minimization. In practice, all singletons, pairs, triplets, ..., *n*-tuples (up to the selected maximum dimension of the descriptor) are listed, and for each set a linear least-squares regression (LLSR) is performed. The *n*-tuple that gives the lowest mean squared error for the LLSR fit is selected as the resulting *n*-dimensional descriptor.
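A minimal Python sketch of this exhaustive search. The names `D_prime` (the matrix of pre-selected columns), `P`, and `best_descriptor`, as well as the omission of an intercept term, are illustrative assumptions rather than the toolkit's actual code:

```python
import numpy as np
from itertools import combinations

def best_descriptor(D_prime, P, max_dim=2):
    """For each dimension up to `max_dim`, fit a linear least-squares model on
    every n-tuple of columns of D_prime and keep the tuple with the lowest MSE."""
    results = {}
    for dim in range(1, max_dim + 1):
        best_mse, best_cols = np.inf, None
        for cols in combinations(range(D_prime.shape[1]), dim):
            X = D_prime[:, list(cols)]
            coef, *_ = np.linalg.lstsq(X, P, rcond=None)
            mse = np.mean((P - X @ coef) ** 2)
            if mse < best_mse:
                best_mse, best_cols = mse, cols
        results[dim] = (best_cols, best_mse)
    return results  # best column indices and MSE for each descriptor dimension
```

The number of least-squares fits grows combinatorially with the descriptor dimension, which is why the search is restricted to the 50 columns pre-selected in the previous step.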
|
|
|
|
|
|
[Back to analytics wiki home](./Home)
|
|
|