Optimization and Complexity. Compared to the original maximization of the log marginal likelihood of the GP (Eq. (2.2)), we now have to optimize the inducing input locations in addition to the hyperparameters of the covariance and likelihood:
\[
\{\hat{Z}, \hat{\theta}\} = \arg\max_{Z, \theta} \mathcal{L} .
\]
Thus, we can optimize the sparse GP bound with respect to the positions of the inducing inputs $Z \in \mathbb{R}^{M \times Q}$ in $\mathcal{O}(NM^2)$. This computation is dominated by the computation of $K_{FU} K_{UU}^{-1} K_{UF}$, where the inversion of the inducing-input covariance matrix $K_{UU}$ can be computed in $\mathcal{O}(M^3)$. This is dominated by the product between $K_{FU}$ and $K_{UU}^{-1}$, which costs $\mathcal{O}(NM^2)$. This results in an overall complexity of $\mathcal{O}(NM^2)$ for this variant of the sparse GP algorithm.
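As a minimal NumPy sketch of where these costs arise, the following computes the dominant term $K_{FU} K_{UU}^{-1} K_{UF}$ via a Cholesky factorization of $K_{UU}$. The RBF kernel, the jitter value, and all variable names here are illustrative assumptions, not part of the original derivation:

```python
import numpy as np

# Hypothetical squared-exponential (RBF) covariance between rows of A and B.
def rbf(A, B, lengthscale=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

rng = np.random.default_rng(0)
N, M, Q = 500, 20, 2
X = rng.standard_normal((N, Q))   # training inputs, N x Q
Z = rng.standard_normal((M, Q))   # inducing inputs, M x Q

K_UU = rbf(Z, Z) + 1e-6 * np.eye(M)  # M x M; factorizing it costs O(M^3)
K_UF = rbf(Z, X)                     # M x N cross-covariance

# Apply K_UU^{-1} to K_UF through a triangular solve: O(N M^2).
# Materializing the full N x N matrix K_FU K_UU^{-1} K_UF would cost
# O(N^2 M); in practice only its diagonal is needed, keeping the
# per-evaluation cost at the stated O(N M^2).
L = np.linalg.cholesky(K_UU)
A = np.linalg.solve(L, K_UF)         # M x N, so A^T A = K_FU K_UU^{-1} K_UF
q_diag = (A ** 2).sum(axis=0)        # diag of the Nystroem approximation
```

Since $K_{FF} - K_{FU} K_{UU}^{-1} K_{UF}$ is a Schur complement of a positive semi-definite matrix, each entry of `q_diag` is bounded by the corresponding prior variance (here 1 for the unit-variance RBF kernel).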