Decision-making in partially observable environments is a key challenge in robotic autonomy. In this talk, I will address the continuous-state POMDP (Partially Observable Markov Decision Process) problem using a reduced-order approximation method. We develop a belief abstraction algorithm that represents the belief state with a fixed number of symbols and ensures that the parameterization of the belief dynamics does not grow exponentially with the system dimension. Based on the Mori-Zwanzig formalism, we develop a novel learning algorithm that improves the accuracy of the reduced-order belief dynamics, and we prove that incorporating this learning algorithm yields a time-uniform bound on the model reduction error. Taking marine autonomy as an example POMDP application, we use numerical simulations to show that the proposed algorithm supports real-time decision-making for underwater vehicles in unknown environments with low model reduction error.
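
As general background (a schematic of the standard formalism, not the specific construction developed in this work), the Mori-Zwanzig formalism decomposes the exact evolution of a projected (reduced) variable into a Markovian term, a memory term, and an orthogonal-dynamics term:

```latex
\frac{d}{dt}\,\mathcal{P}z(t)
  \;=\; \underbrace{M\,\mathcal{P}z(t)}_{\text{Markovian}}
  \;+\; \underbrace{\int_0^t K(t-s)\,\mathcal{P}z(s)\,ds}_{\text{memory}}
  \;+\; \underbrace{F(t)}_{\text{orthogonal dynamics}}
```

Here $\mathcal{P}$ denotes a projection onto the resolved (reduced) variables, $K$ is the memory kernel, and $F(t)$ collects the unresolved (noise-like) dynamics; closing or learning approximations of the memory and orthogonal terms is what a Mori-Zwanzig-based learning scheme must address.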