Week Reviews

Progress summaries for each week

Week 4 2026: Navigation

Accomplishments:
  • Street View Workflow Research
  • Gallery View Update
  • Refresh and Deeplinking Fix
  • Book Now Button
  • New Hotspot Icon and Interaction
  • Bonus: Click and Go, 3D Hotspot Off Fix, Right Click to Drag Diorama

Moving through a series of photos is a major feature that separates virtual tours from traditional photos. In virtual tours, you are spatially aware; you rotate your camera, click hotspots, or click a room directly from a diorama view. Compare this to traditional photo reels or galleries, where images are viewed from only one angle and navigation is limited to moving left or right. With this week’s improvements, navigating a virtual tour has grown more appealing, and sharing your perspective has become easier than before through deep linking.

Google Street View Workflow Research

[Images: Street View Nadir Blur · Street View Backpack · Street View Logo · Multi-Floor Elevator Tool · Blue Dots]

A popular, globally recognized tool for virtual tours is Google Street View. While “Street” may be in the name, business interiors can also be uploaded to the platform. To consider offering this service, I first had to understand what goes into providing it. This research revealed the complexities and challenges that come with generating and uploading 360 tours to Google’s Street View platform.

First, let’s start with the outcomes. What can the tours look like? Google Street View offers three ways to display 360 panoramas. One is level-based, like the British Museum. Another is the familiar “Blue Line” tour that appears on streets and hiking trails, such as Mt. Wakakusa in Nara. Lastly, there are the “Blue Dots,” which show a single image, like those you might find around Doi Suthep in Chiang Mai. Uploading individual images for the Blue Dot outcome is fairly simple, though the results offer only one perspective per dot. Blue Lines and connected tours are more complex to upload. Google offers two ways to upload tours for free.
The first is Google Street View Studio, which can only create Blue Line tours (no connected/elevator feature) and requires the upload to be in video format. Google also offers a number of free APIs, which require precise, manually programmed steps to upload content to the platform. Third parties have stepped in to simplify calling these APIs, though this comes at a price: platforms exist today that charge a small fee per upload, or a higher fee for a one-time license. Beyond calling the APIs, these tools offer a visual user interface and features that help prepare the data so the output matches expectations.

When you upload to Google, it makes many of the decisions about your tour. For Blue Lines, Google decides which frames to extract from the video. For connected tours, Google decides where to place the hotspots based on GPS and compass data. The photos or video uploaded to Google must meet specific criteria to get the desired outcome. For instance, all photos must include GPS data, and consecutive photos must be 3-5 meters apart (roughly 10-15 ft); otherwise, the tour may end up as a scattering of Blue Dots with no navigation hotspots at all. All photos must also include heading (compass) data that identifies where north is.

These requirements alone complicate much of the workflow. Many 360 cameras do not include GPS or compass capabilities. Cell phones can assist with GPS data, though if the phone is not directly under the camera, the recorded position will be off by several feet, or even sitting in the next room with the hiding photographer. This is where manual effort must be made to correct the GPS data, and where many of the third-party tools find their usefulness. Setting the heading must also be done manually. While tools like the Insta360 GPS remote can provide a compass heading, they only offer this feature as an overlay during video shoots, not in photo mode, so north directions must be added manually after the fact.
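The spacing requirement is straightforward to verify once GPS data exists. Below is a minimal pure-Python sketch (not part of any official tool; the function names are hypothetical and the 3-5 m window comes from the requirements above) that flags consecutive photo positions outside the window using the haversine formula:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_spacing(points, lo=3.0, hi=5.0):
    """Flag consecutive photo positions outside the 3-5 m window.

    points is an ordered list of (lat, lon) tuples, one per photo.
    Returns (index_a, index_b, distance_m) for each bad gap.
    """
    issues = []
    for i in range(len(points) - 1):
        d = haversine_m(*points[i], *points[i + 1])
        if not lo <= d <= hi:
            issues.append((i, i + 1, round(d, 2)))
    return issues
```

Running a check like this over the ordered shot list before uploading could catch spacing problems while still on location, rather than after Google has already split the tour into disconnected Blue Dots.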
For smaller buildings, this may not be too time-consuming, especially if the phone stays with the camera. If the camera always faces the same direction, north can likely be set with a single batch update. However, this process does not appear to scale for larger shoots. The phone’s GPS data is only so accurate, so quality assurance must always be done to ensure that rooms and hotspots appear in the correct locations. Additionally, consumer compasses can encounter interference, and attempts to keep the camera facing the same direction may drift over dozens or hundreds of shots. This would be especially hard to manage with multiple photographers shooting simultaneously.

Taking photos in open spaces comes with its own challenge: hiding the photographer. Homes and apartments are simple, as the camera operator can step away into another room. At a park or in a warehouse, this may prove cumbersome and time-consuming. One solution is to take multiple photos with the person standing in different positions and composite the clean regions together. This may be acceptable in some instances where the images can be far apart. It is less acceptable when every photo must be taken 10-15 ft apart, as it greatly expands time on location and time editing. If an automated way to remove the photographer from a single photo can be found, this will no longer be a problem.

I questioned how Google Street View has so successfully hidden the photographer from its hiking-trail Blue Line tours. Their equipment appears to solve both the time cost and the photographer-hiding problem at the expense of some quality. The Google team creating these tours appears to wear a backpack rig that most likely captures images every few steps. This means someone with the backpack could take a leisurely hike with images snapping automatically along the way. Compare this to my proposed workflow of walking 10 ft, taking two photos, then moving the camera again.
Since the photographer is carrying the camera and is close to the lenses, they must be hidden automatically. The process appears to stretch


Week 3 2026: Accessory Information

Accomplishments:
  • Drone Hotspots
  • Information iFrame Hotspots
  • Brand Watermarks
  • Bonus: Click to Walk 360 Tour, Click to Walk 3D Tour, 3D Cursor Workflows

360 tours capture a moment in time for the viewer to visually immerse themselves in. Information like touch and historical significance may not be easily conveyed by 360 photography alone. With accessory information, like branding, iframes, and drone views, the user can take their understanding beyond first person, with a bird’s-eye view and supplemental details. And when they are ready, they can now turn interest into action with the ability to book and purchase from within the tour.

Drone Hotspots

[Images: Drone Icon · Drone Shot]

Drone shots can provide views of a space unlike most have ever seen. Viewers can see the plot layout, the landscapes near and far, and so much more. What makes it so amazing is the same thing that makes it abnormal. Walking through a tour, you often click on a hotspot, and your expectation is to transition to that spot on the floor. When you enter a drone hotspot, you are clicking the sky, which is much different from the floor. How do you convey that? To showcase the difference in transition, I went through a variety of considerations: birds, eyes, skylines, and more. None seemed to intuitively convey the transition in all cases. Not every drone shot will be near a city, eyes are already used as a button to show and hide, and birds don’t hover, they soar. I decided to create a simple icon for a quad-rotor drone. Quad-rotors have become a standard drone design, have grown in popularity, and appear unlikely to be mistaken for another feature.

Information iFrame Hotspots

[Images: Rose Kennedy Greenway Before and After · HTML iframe example]

Virtual tours excel at providing spatial information in an immersive way. Sometimes it’s important to convey specific details about a space that aren’t easily conveyed visually. For instance, Boston’s Rose Kennedy Greenway includes a plaque.
A tourist wouldn’t immediately be aware that the space used to be a concrete overpass. With a placard, they can learn about the history, including the time, effort, and frustration it took to change the place in such a way. With information hotspots, we can share any information directly within the tour.

Information hotspots open a window within the tour, working like an iframe on other websites. iframes let you embed another webpage, within a window or “frame,” inside your own page. This gives us the ability to make custom information cards with HTML, or link to any existing webpage. This flexibility lets the feature satisfy many needs, from historical placards to space details like operating hours, and even opportunities to convert interest to action by linking conversion or purchase pages right into the tour. The feature was added to the tour by creating a new icon and hotspot style for the user to see. When the user interacts with it, it relies on a new iframe_popup plugin I created. This plugin takes control of the scene, blurring the background and disabling controls to bring focus to the iframe. The iframe can link to either a local folder with tour-specific HTML files or separate webpages.

Brand Watermarks

[Image: Brand Watermark]

If a user is browsing multiple tours, or trying to remember where the current tour is from, some level of brand identity can be helpful. It reminds the user who the tour relates to and helps the business reinforce its presence. I added a new section to the tour display that allows a brand image to be included. This is customizable from the tour.xml and can be set to different levels of transparency to balance visibility against its effect on immersion. It can also be optionally hidden when the toolbar or hotspots are toggled.

Bonus: Click to Walk 360 Tour

When navigating a 360 tour, you may not always want to click directly on the hotspots. It can get frustrating to click close and not receive a response.
This goes along with the recent feature of hiding hotspots: repeatedly hiding and unhiding hotspots when moving between rooms can grow cumbersome. What if you could click in some direction and simply go there? I came across an existing plugin by LastRomantik which does just this for standard 360 tours. I tested this plugin against an example tour, and when enabled it does a good job of navigating to the nearest hotspot when the user double-clicks, regardless of the hotspot’s visibility.

Bonus: Click to Walk 3D Tour

When navigating a tour, 2D or 3D, it can be annoying when nothing happens on a click. Responsiveness is important for any user experience. How can we better navigate 3D tours and inform the user when there are no more locations in a certain direction? We can use a Click and Go workflow, similar to the one mentioned above. I began creating a custom plugin that works for 3D scenes.

3D scenes come with more challenges than their 2D alternatives. When the user double-clicks, the plugin draws a line, or casts a ray, from the camera through the mouse position into the scene. It then determines which hotspots are closest to the ray. If the user double-clicks a wall or a ceiling, there may be a hotspot behind it. It’s unnatural to teleport through walls and ceilings, so these hotspots must be ignored. This proved surprisingly challenging. Through trial and error, I was unable to accurately test for hits between the ray and the 3D model in the scene. After stepping away, I considered other examples or features that may already solve this problem. I found an example where a 3D cursor is implemented in KRPano: an image which follows your mouse while clamped onto the 3D model. After reviewing this logic, I was able to properly hit-test the 3D model and begin ignoring hotspots that are not visible to the user. This provides a better experience with fewer breaks in immersion.
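The actual plugin is written against KRPano, but the core geometry can be sketched in Python. This is an illustrative sketch, not the plugin’s real code: it measures each hotspot’s perpendicular distance to the click ray and skips hotspots behind the camera or beyond the first wall/ceiling hit (here passed in as wall_hit_t, the ray distance returned by a model hit test; the direction vector is assumed to be unit length):

```python
import math

def point_ray_distance(origin, direction, point):
    """Perpendicular distance from a point to a ray, plus the distance
    along the ray to the closest approach (negative = behind the camera)."""
    v = [p - o for p, o in zip(point, origin)]
    t = sum(vi * di for vi, di in zip(v, direction))  # projection length
    closest = [o + t * d for o, d in zip(origin, direction)]
    return math.dist(point, closest), t

def pick_hotspot(origin, direction, hotspots, wall_hit_t=None):
    """Choose the hotspot nearest the click ray, ignoring hotspots that
    are behind the camera or past the first wall/ceiling intersection."""
    best = None
    for name, pos in hotspots.items():
        perp, t = point_ray_distance(origin, direction, pos)
        if t <= 0:
            continue  # behind the camera
        if wall_hit_t is not None and t > wall_hit_t:
            continue  # occluded by the 3D model
        if best is None or perp < best[1]:
            best = (name, perp)
    return best[0] if best else None
```

With wall_hit_t supplied by the 3D-cursor-style hit test mentioned above, a hotspot behind a wall is simply farther along the ray than the wall itself, which is what makes the occlusion check a single comparison.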


Week 2 2026: 3D Visualization for Multi-Floor Tours

Accomplishments:
  • KRPano Multi-Floor Implementation
  • Tested Retopology Workflow Against a Dense Multi-Floor Tour
  • Bonus: Created Customizable Image Gallery Plugin

Single-floor 3D visualizations are easy: you look down from the top and can see the entire floorplan. But buildings with exteriors or multiple floors introduce new challenges. This week, I began tackling questions and user experience (UX) challenges related to navigating and using a 3D visualization for a multi-floor building.

Large buildings especially benefit from 3D visualization. If you want to see the third floor, it can be as simple as moving your eyes to that level and finding the room you want. The alternatives are walking there room by room in a 2D tour or navigating menus to find the correct floor and room. A successful multi-floor visualization requires both an efficient workflow for building the 3D model and an intuitive user experience for navigating and selecting parts of it. This week, I adjusted my 360 workflow to conform to the needs of multi-level spaces. The current process runs into some hardware limitations, and further improvements will be required to fully support these tasks.

Began Implementing Multi-Floor Tour Features in KRPano

[Images: Exterior On · Floor 1 Selected · Floor 2 Selected]

I wanted to ensure that the 360 tour tool I use could support the needs of multi-floor 3D visualizations. I created a simple two-floor example that included both floors, a building exterior, and a dome object representing the surrounding ground. The dome is always visible so that when someone views a floor, they can understand its position relative to the outside using landmarks. The exterior can be toggled on or off, allowing the viewer to see the building like a snow globe if they choose. Interior floors are selectable, and when one floor is selected, all other floors become transparent. This prevents upper floors from blocking the view when looking inside or selecting a hotspot for navigation.
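The per-floor display state can be sketched in Python for illustration (the real logic lives in KRPano; the pitch convention and threshold value here are assumptions, reflecting the idea that looking down should favor interior hotspots while near-horizontal views keep other floors visible for selection):

```python
def floor_display_state(camera_pitch_deg, floor_index, selected_floor,
                        select_threshold_deg=-20.0):
    """Decide per-floor alpha and clickability from the camera pitch.

    Convention (assumed): pitch 0 is horizontal, -90 looks straight down.
    Looking down favors interior hotspots, so non-selected floors fade out
    and stop capturing clicks; near-horizontal views keep every floor
    visible and selectable so the viewer can switch floors.
    """
    looking_down = camera_pitch_deg <= select_threshold_deg
    if floor_index == selected_floor:
        return {"alpha": 1.0, "selectable": True}
    if looking_down:
        return {"alpha": 0.15, "selectable": False}  # ghosted, click-through
    return {"alpha": 0.6, "selectable": True}        # available for selection
```

The exact alpha values and threshold are illustrative tuning knobs; the point is that a single camera angle drives both transparency and hit-testing, so the two behaviors can never disagree.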
A key challenge was balancing the ability to select higher floors to change the floor of interest while also being able to ignore them when clicking on interior objects. The solution was to dynamically adjust the transparency and selectability of floors based on the viewer’s angle. Most navigation hotspots sit on horizontal surfaces and are easier to see from higher angles, while floors themselves are clearer from lower, more horizontal views. This proof of concept was successful: the 360 tour can manage all of these needs, and I believe this approach has strong potential for an intuitive user experience.

Tested Workflow Against a Heavy Scan

[Images: Phone Scans · Phone Scans Density · Remade Model Layout · Remade Model Textured · Remade Model Density]

I tested my 3D visualization workflow using the data from the Supalai Place 360 tour. This tour is very dense, with over 100 photos. If you are familiar with the layout, navigating through locations is quick. If not, learning the space through clicking can become cumbersome. A gallery view exists, showing a grid of images organized by floor and room, but even this becomes unwieldy with such a large number of photos and limited categorization.

Creating the 3D model took considerable time, but it provided an opportunity to practice my 3D modeling skills. Using Blender, I merged connected vertices automatically and created clean topology, avoiding n-gons through loop cuts and other tools. Once the models for each floor were complete and textured using the 360 photos, I performed a process called baking. Baking extracts the relevant portions of each 360 image and merges them into a single texture map. While normally straightforward, baking more than 100 high-resolution 360 images proved extremely demanding on hardware. Although the JPG image folder was only 4 GB, the system used 24 GB of GPU memory and 64 GB of system memory, because JPG files are compressed and expand significantly when loaded into memory.
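The scale of that expansion is easy to estimate. Assuming 8K equirectangular panoramas (8192 x 4096) and images being promoted to 32-bit float RGBA buffers for baking (an assumption about Blender’s internal handling, not a measured figure), the arithmetic looks like this:

```python
def decoded_size_gb(width, height, channels=4, bytes_per_channel=4):
    """Approximate in-memory size of one decoded image.

    A compressed JPG balloons to width * height * channels * bytes per
    channel once decoded; with float RGBA that is 16 bytes per pixel.
    """
    return width * height * channels * bytes_per_channel / 1024 ** 3

# One 8K equirectangular panorama (8192 x 4096):
per_image = decoded_size_gb(8192, 4096)  # 0.5 GB each once decoded
total = 100 * per_image                   # ~50 GB for a 100-photo bake
```

Roughly 0.5 GB per decoded image, or about 50 GB across 100 photos, which is consistent with a 4 GB JPG folder exhausting 24 GB of GPU memory and 64 GB of system memory.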
This exhausted system resources and caused Blender and Windows to crash. One positive takeaway was realizing the benefit of using JPG for final output instead of PNG. Transparency has not been critical, and switching to JPG can reduce bandwidth and hosting storage usage by roughly 85%.

This process highlighted several lessons. Manual retopology by cutting and extruding floors is time-consuming and scales poorly with project density. Other tools, such as Polycam’s 3D floorplan mode, exist but struggle with elevation changes and curved spaces. I plan to research alternative approaches, such as blocking out layouts with simple primitives. For panorama importing and texturing, breaking the workflow into room-by-room steps may help. Applying textures incrementally and combining meshes per floor afterward should reduce memory usage and simplify the baking process. I also learned that a usable exterior mesh may not be possible without sufficient exterior coverage. Balcony photos were often taken at angles too steep to properly capture the building façade, resulting in distorted textures. Drone imagery may be required to fully support this feature.

Bonus: Created Responsive Grid

[Images: Other Gallery View · New Gallery View]

When a 360 tour is very dense and lacks a 3D visualization, a gallery view can be beneficial. This also applies to standalone 360 galleries, such as my projects from Japan and Thailand. While adding a gallery view to the Supalai Place tour, I noticed that the existing tool presents all content at once, which may feel overwhelming to some users. I decided to create a new gallery tool with a user experience more aligned with my expectations. Dropdown tabs allow viewers to quickly see available sections before diving into detailed options. I also centered the content on the screen so that users on wide displays can view everything comfortably without excessive head movement.
Additional optional design features include single-column layouts that slide in from the side. Overall, the use of tabs and centered content provides a strong experience for navigating dense 360 galleries and offers a practical alternative when a 3D visualization is not available.

Summary

My current workflow for 3D visualizations works well for single-floor, small-scale spaces but does not yet scale to multi-level, room-dense locations. These exercises helped identify the research paths needed


Week 1 2026: Efficiency

Accomplishments:
  • Automated Tripod Removal
  • Automated Hotspot Image Adding in Blender
  • Automated Depthmap Tag Translation from Blender to Tour File
  • Automated Tour File Encryption
  • Bonus: Added Button to Hide All Hotspots in Tour

360 tour creation requires manual touch-ups and information for each scene or photo used. This is no big deal for a small room, but for a home with over 100 photos it will take forever. I would rather spend that time on new features or new tours. To be more efficient, I programmed four new tools that save me hours when making tours.

Automated Tripod Removal

[Images: Original Bottom · Blur Bottom · Automated Inpaint · Incorrect Manual Inpaint · Logo Replacement · Tripods in 3D Visual]

360 photos capture everything, even what’s below them. Modern cameras like the Insta360 series do a fantastic job of removing the tripod pole, which is almost always invisible. The feet are another story: they stick out. While my tripod’s feet are quite small, they appear in every photo. Especially in Virtual Reality (VR), this can take the user out of the experience, reminding them they are in a photo. When creating 3D visualizations, these photos are projected onto the space, and the tripod feet get scattered across the floor.

Solutions exist that are successful in hiding the tripod. Insta360 offers a tool to cover the area with a logo or picture. You can blur an entire circle of the space, or manually inpaint the area using a tool like Affinity Designer. However, hiding the space with a logo or blur reminds the viewer they are in a photo and breaks immersion. Inpainting works wonderfully, but it’s manual and unacceptable for hundreds of photos. I wanted an automated solution that seamlessly erases the tripod feet as if they were never there, preserving user immersion and saving hours of manual photo editing. Using ComfyUI’s node editor and an AI image generation model, I programmed an automated solution that removes the tripod feet with an over 85% acceptance rate.
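For context on why the tool can look only at the bottom of each image: in an equirectangular 360 photo, latitude maps linearly to image rows, so the nadir region where a tripod directly under the camera appears is simply the bottom band of rows. A small sketch (the 30-degree cutoff is an illustrative value, not the actual setting used in the ComfyUI workflow):

```python
def nadir_rows(height, cutoff_deg=30.0):
    """Rows of an equirectangular image covering the nadir region.

    Equirectangular projection maps latitude linearly to rows: row 0 is
    the zenith (+90 deg) and the last row is the nadir (-90 deg). Returns
    the (start_row, end_row) band holding everything below -cutoff_deg,
    which is where a tripod under the camera shows up.
    """
    # latitude at row r is 90 - 180 * r / height, so solve for the row
    # where latitude drops to -cutoff_deg:
    start = int(height * (90.0 + cutoff_deg) / 180.0)
    return start, height
```

Restricting the AI inpainting to this band keeps the rest of the panorama untouched, which both speeds up processing and avoids the model altering areas that were already correct.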
The tool takes a folder of images and, for each image, looks only at the bottom. It cuts out a circle where the tripod is expected to be and uses AI to generate what should be there instead. An 85% success rate is already great and has saved me hours of manual editing. In the future, better models, or training a custom model to detect and remove tripod feet, could push this number even higher. This experience taught me how to use ComfyUI’s node editor to automate and manipulate 360 images, as well as how to use live projection mode in Affinity Designer for quick manual touch-ups like removing tripods or reflections.

Hotspot Adding in Blender

When creating 3D visualizations, I use a tool called Blender. This is where I decide how all of the photos fit next to each other. A manual step is required to add the circular “hotspot” that you click to move to another photo. For small tours, this process goes quickly. For large tours with 100 or more images, the time can become unreasonably long. Blender supports scripting in Python, and the tool I use to place 360 photos in 3D, called PanoCamAdder+, is written in Python and exposes functionality that can be accessed programmatically. The tool I created lets me pick the image I want for a hotspot. It then checks the unique name assigned to every 360 photo in the scene and uses PanoCamAdder+ to generate a hotspot image for each one. What once took an hour now happens in seconds with the push of a button.

Automating Depthmap Tags in tour.xml

[Images: PCA Depthmap · Tour Depthmap]

To show 3D visualizations, the 360 tour software needs to know how to display the space and where each 360 photo exists within it. This requires taking the locations of the photos from Blender and placing them into a file used by the tour. This is not a simple copy-and-paste process: the data from Blender must be written in a specific, organized format. Doing this manually takes time and increases the chance of misclicks or typos, which costs even more time.
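This kind of automated insertion can be sketched with Python’s standard xml.etree module. The sketch below is illustrative only, not the actual tool: the element and attribute names are placeholders, since the real tour.xml schema depends on the viewer configuration.

```python
import xml.etree.ElementTree as ET

def apply_positions(tour_xml, positions):
    """Write Blender-derived camera positions into a tour XML string.

    positions maps each scene name to an (x, y, z) tuple exported from
    Blender. The tx/ty/tz attribute names are placeholders for whatever
    the tour's depthmap schema actually requires.
    """
    root = ET.fromstring(tour_xml)
    for scene in root.iter("scene"):
        name = scene.get("name")
        if name in positions:
            x, y, z = positions[name]
            scene.set("tx", str(x))
            scene.set("ty", str(y))
            scene.set("tz", str(z))
    return ET.tostring(root, encoding="unicode")
```

Parsing the file instead of string-pasting means the output is always well-formed XML, which removes the typo and misclick risk entirely.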
I wrote a Python tool to handle these repeatable steps. It takes the data output from Blender and PanoCamAdder+, reads the target file, and inserts the information in the correct locations. Within seconds, I can now apply all location information to every 360 photo with no manual intervention.

Encrypting All Tour Files

[Image: Tour File in Dev Tools]

When a 360 tour loads in a browser, the code and related files are downloaded to the user’s device. If a client pays for custom features, or if you want to protect your work, encryption makes it more difficult for someone to inspect and reuse that code. JavaScript and XML encryption is inherently limited because files must be decrypted to run. It’s similar to a locked gate: it doesn’t make theft impossible, but it reduces opportunities.

A typical 360 tour can load dozens of files. Encrypting them individually is easy, but it increases the risk of forgetting a critical file. Using Python, I wrote a script that scans the tour project folder and automatically encrypts every supported file using the existing encryption tool. While this doesn’t save much manual time, it provides peace of mind by ensuring that all files are protected consistently.

Bonus: Hide All Hotspots Button

[Image: Hotspot Overlap]

Immersion has been on my mind lately. When someone enters a virtual space, they imagine what it feels like to be there. Our minds quickly notice what doesn’t belong. If a room is full of white circles overlapping tables or doorframes, it becomes harder to maintain the illusion. I added a feature that allows users to hide all hotspots with a single click. This lets them immerse themselves more fully in the environment and view the entire 360 photo without navigation elements getting in the way.

Summary

The new year is off to a great start. I’m curious how much time I’ll save thanks to the effort put into



© 2025 Justin Codair