In recent years, interest has grown in multidisciplinary and interdisciplinary human-centered AI, which calls for integrated design and cross-domain connection capabilities, and reinforcement learning (RL) has attracted particular attention within this area. Nevertheless, innovative RL research can still benefit from deeper exploration of human-inspired directions to advance artificial intelligence further; for example, human-centered work on RL can start from human-in-the-loop approaches to achieve rapid modeling. In this study, we therefore leverage deep Bayesian knowledge and analysis strategies to build meta-reinforcement learning (meta-RL) agents grounded in the Markov Decision Process (MDP), an architecture that yields stronger adaptation and generalization. We introduce a novel meta-RL framework, Human-Inspired Meta-RL (HMRL), which enables agents to perform high-level tasks through behavioral inspiration and to make flexible decisions, using the knowledge and predictions of deep Bayesian neural networks to maximize the dynamic rewards generated in diverse environments. The framework allows agents to discover a common modeling approach during training and prevents them from making major mistakes while training. Experimental results show that our method significantly enhances the agent's adaptability while reducing computational cost. By integrating the HMRL framework, reinforcement learning becomes more robust and scalable, paving the way toward broader applications of artificial intelligence in the future.
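The abstract does not specify HMRL's internals, but the general meta-learning pattern it builds on can be sketched in a few lines: an inner loop that adapts to each sampled task, a Reptile-style outer loop that updates a shared meta-initialization, and a small ensemble whose spread serves as a crude stand-in for the epistemic uncertainty of a deep Bayesian neural network. Everything below is a hypothetical illustration on a toy 1-D regression task family, not the paper's method; all names, task definitions, and hyperparameters are assumptions.

```python
import random

random.seed(0)

# Toy task family: y = a*x + b. A hypothetical stand-in for the paper's
# MDP-based meta-RL tasks (illustrative only, not the HMRL benchmark).
def sample_task():
    return random.uniform(0.5, 2.0), random.uniform(-1.0, 1.0)

def loss(w, task, xs):
    a, b = task
    return sum((w[0] * x + w[1] - (a * x + b)) ** 2 for x in xs) / len(xs)

def grad(w, task, xs):
    a, b = task
    g0 = g1 = 0.0
    for x in xs:
        err = w[0] * x + w[1] - (a * x + b)
        g0 += 2 * err * x / len(xs)
        g1 += 2 * err / len(xs)
    return [g0, g1]

def adapt(w, task, xs, lr=0.05, steps=5):
    # Inner loop: a few gradient steps of task-specific adaptation.
    w = list(w)
    for _ in range(steps):
        g = grad(w, task, xs)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

xs = [i / 5.0 for i in range(-5, 6)]

# Ensemble of meta-initializations: the mean is the point prediction,
# the spread a cheap proxy for Bayesian (epistemic) uncertainty.
ensemble = [[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(5)]

# Reptile-style outer loop: nudge each meta-initialization toward the
# weights obtained after inner-loop adaptation on a sampled task.
for _ in range(200):
    task = sample_task()
    for w in ensemble:
        w_adapted = adapt(w, task, xs)
        for i in range(2):
            w[i] += 0.1 * (w_adapted[i] - w[i])

# On a held-out task, a few adaptation steps from the meta-learned
# initialization should not increase the loss.
test_task = sample_task()
mean_w = [sum(w[i] for w in ensemble) / len(ensemble) for i in range(2)]
before = loss(mean_w, test_task, xs)
after = loss(adapt(mean_w, test_task, xs), test_task, xs)
print(before, after)
```

The ensemble here replaces a genuine deep Bayesian network purely for brevity; deep ensembles are a common, cheap approximation to Bayesian posterior uncertainty, which is the role the abstract assigns to its Bayesian component.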