Our NASA SUITS project will be improved through new features and refinements to previous interactions. Our team aims to alleviate stress and cognitive overload by using object recognition and an adaptive, intuitively designed heads-up display (HUD). These elements include visual cues, adaptive layouts, transparency that optimizes on-screen focus and field of view, headset networking for efficiently sharing information, gaze-based eye-tracking controls for easy interfacing, and navigation features such as a mini-map of the user’s location. Throughout the HUD, high-contrast bright colors are set against dark backgrounds so that the display remains legible regardless of the user’s lighting conditions or surroundings.
As shown in Figure 1, the application opens with the onboarding screen. This screen is primarily used for user testing and new-user training, as astronauts will already have been trained on the HUD before using it during their missions.
A smart, practical vitals display, shown in Figure 2, presents important information in a discreet and customizable way. All vitals will be continuously tracked and can be viewed with a voice command. The astronaut will be able to choose which elements to “pin” so that they remain omnipresent. Another customizable option for the vitals screen is a minimized view, which allows the astronaut to prioritize the surrounding environment over user-interface details. For greater ease of use, astronauts see dynamic icons that change in step with their vitals. Color is also used to convey meaning efficiently: green for full or almost full, yellow for the middle range, and red for when a vital is low. These colors help the astronaut detect important information at a glance, such as low oxygen levels.
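A minimal sketch of this color logic appears below. The threshold fractions and the specific vital used in the example are illustrative assumptions, not mission specifications:

```python
# Illustrative sketch of the green/yellow/red vitals coloring described above.
# Threshold values are assumptions for demonstration, not mission specs.

GREEN, YELLOW, RED = "green", "yellow", "red"

WARN_FRACTION = 0.5      # assumed cutoff between green and yellow
CRITICAL_FRACTION = 0.2  # assumed cutoff between yellow and red

def vital_color(value: float, minimum: float, maximum: float) -> str:
    """Map a vital reading onto the green/yellow/red scheme."""
    fraction = (value - minimum) / (maximum - minimum)
    if fraction <= CRITICAL_FRACTION:
        return RED     # low: draw the astronaut's attention
    if fraction <= WARN_FRACTION:
        return YELLOW  # middle range: caution
    return GREEN       # full or almost full

# Example: an oxygen reading at 15% of tank capacity renders red.
print(vital_color(15.0, 0.0, 100.0))  # -> "red"
```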
We will use our server to obtain navigation data, including heading, elevation, bearing, ETA, points of interest, and PET (a counter started on user command). This data will be fed into our interface and presented through intuitive displays, guiding the user to their destination with access to all the information they need.
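A sketch of how the HUD might poll this payload from the server is shown below. The endpoint URL and field names are placeholders for illustration; the actual server schema may differ:

```python
# Sketch of polling navigation data from our web server.
# The endpoint URL and JSON field names are illustrative placeholders.
import json
from dataclasses import dataclass
from urllib.request import urlopen

TELEMETRY_URL = "http://localhost:8080/api/navigation"  # placeholder endpoint

@dataclass
class NavigationData:
    heading: float            # degrees from north
    elevation: float          # meters
    bearing: float            # degrees toward the current destination
    eta_seconds: float        # estimated time of arrival
    points_of_interest: list  # nearby named locations
    pet_seconds: float        # PET counter, started on user command

def fetch_navigation() -> NavigationData:
    """Pull the latest navigation payload and map it into the HUD model."""
    with urlopen(TELEMETRY_URL) as response:
        payload = json.load(response)
    return NavigationData(
        heading=payload["heading"],
        elevation=payload["elevation"],
        bearing=payload["bearing"],
        eta_seconds=payload["eta"],
        points_of_interest=payload["points_of_interest"],
        pet_seconds=payload["pet"],
    )
```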
As shown in Figure 3, the design elements are minimal yet informative about the tasks to be completed. Green arrows will show the quickest route to each objective, and clear icons will indicate which tools are necessary for each job.
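One plausible way to compute the quickest route behind those arrows is a standard shortest-path search over a waypoint graph; the sketch below uses Dijkstra's algorithm, and the waypoint names and distances are made up for illustration:

```python
# Illustrative shortest-path search for the green route arrows.
# Waypoint names and edge distances are hypothetical.
import heapq

def quickest_route(graph: dict, start: str, goal: str) -> list:
    """Dijkstra's algorithm over a weighted waypoint graph."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, distance in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + distance, neighbor, path + [neighbor]))
    return []

# Hypothetical waypoint graph with distances in meters.
waypoints = {
    "airlock": {"rover": 40.0, "sample_site": 90.0},
    "rover": {"sample_site": 30.0},
    "sample_site": {},
}
print(quickest_route(waypoints, "airlock", "sample_site"))
# -> ['airlock', 'rover', 'sample_site']  (70 m beats the 90 m direct path)
```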
Like the navigation solution, the science sampling task will obtain its instructions from our web server, and its display will resemble any other procedure the user interacts with. We will also add the voice commands “Take Photo,” “Take Video,” and “Record Voice,” allowing the user to capture photos, videos, and field notes with the HoloLens during the sampling process.
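A minimal sketch of dispatching those voice commands is shown below. The handler functions are stubs; on the headset they would invoke the HoloLens photo, video, and audio capture APIs:

```python
# Minimal dispatch table for the sampling voice commands.
# Handlers are stubs standing in for the headset's capture APIs.

def take_photo():
    print("Capturing photo of sample site...")

def take_video():
    print("Recording video...")

def record_voice():
    print("Recording field note...")

VOICE_COMMANDS = {
    "take photo": take_photo,
    "take video": take_video,
    "record voice": record_voice,
}

def handle_command(utterance: str) -> None:
    """Route a recognized utterance to its capture action, if any."""
    handler = VOICE_COMMANDS.get(utterance.strip().lower())
    if handler:
        handler()

handle_command("Take Photo")  # -> Capturing photo of sample site...
```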
As shown in Figure 4, objective markers display the nearest and most important objectives. The shorter a marker’s stem, the closer the objective; the larger the circle around its icon, the more critical the task. Each objective is given an icon representing the tools needed for the job. Instructions with more detailed information about the astronaut’s objective will be fed to the user automatically.
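The marker geometry just described reduces to two simple mappings, sketched below: stem length grows with distance and ring radius grows with criticality. All scaling constants are illustrative assumptions:

```python
# Sketch of the objective-marker geometry: stem length scales with
# distance, ring radius with criticality. Constants are illustrative.

MAX_STEM_METERS = 0.10  # on-screen stem length cap (assumed)
MAX_DISTANCE = 500.0    # distance at which stems stop growing (assumed)
BASE_RING = 0.02        # ring radius for the least critical task (assumed)
RING_STEP = 0.01        # radius added per criticality level (assumed)

def marker_geometry(distance_m: float, criticality: int) -> tuple:
    """Return (stem_length, ring_radius) for an objective marker.

    A nearer objective gets a shorter stem; a more critical task
    gets a larger ring around its icon.
    """
    stem = MAX_STEM_METERS * min(distance_m, MAX_DISTANCE) / MAX_DISTANCE
    ring = BASE_RING + RING_STEP * criticality
    return stem, ring

# Example: a close, highly critical objective gets a short stem, big ring.
print(marker_geometry(50.0, criticality=3))  # -> (0.01, 0.05)
```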
By utilizing each of these elements, astronauts will be safer and will complete tasks more easily. Our mission is to design a HUD with the well-being of the astronauts as its focus, which we plan to accomplish through object recognition and adaptive, customizable UI layouts that provide immediate access to critical information.