Rivian Blue
Member
Joined: Nov 14, 2024
Messages: 139
Reaction score: 16
Rivian: R1S Rivian Blue
RJ made a statement a few days ago, and I quote:
“As competition in this space evolves, I think you are going to see [automakers] with more sensors,” Scaringe said at Rivian’s showroom here in late January. “One of the areas where we are different than Tesla — we’ve put more sensors in the vehicle, recognizing that is a way to catch up to what they’ve built using a camera-only system.”
“Gen one is going to get slightly better over time,” Scaringe said regarding driver assistance. “Gen two is going to be wildly better a year from now versus what it is today because of how the system is built.”
Personally, I think he's not wrong. As a Tesla owner, I'm well aware that Tesla shipped radar and ultrasonic sensors in all of its vehicles - and then removed them. The 2021 Model S, for example, had both ultrasonic sensors (really best for parking) and radar (better for longer-range driving), but the radar was unplugged at a service visit once the software no longer relied on its input.
Whether it's radar, LIDAR, or some combo, having that topographical sensory input, along with ground truthing/pre-mapping, is a good way to plug the gaps in a limited visual reference model. That additional sensor data can also be used to ground-truth unmapped roads even when the vehicle isn't in autonomous mode. That's how you start to close the data gap.
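To make that concrete, here's a toy sketch of the textbook way two independent range estimates get combined: inverse-variance weighting, where the less noisy sensor dominates. The numbers are made up for illustration, and this is obviously not Rivian's actual stack - just the standard idea of why a second sensor plugs holes in a camera-only estimate.

```python
# Minimal inverse-variance fusion of two independent range estimates.
# All numbers are illustrative assumptions, not any automaker's real tuning.

def fuse(camera_range_m: float, camera_var: float,
         radar_range_m: float, radar_var: float) -> tuple[float, float]:
    """Combine two noisy range estimates; the lower-variance sensor dominates."""
    w_cam = 1.0 / camera_var
    w_rad = 1.0 / radar_var
    fused = (w_cam * camera_range_m + w_rad * radar_range_m) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)
    return fused, fused_var

# Camera depth gets noisy at distance; radar range stays tight.
est, var = fuse(camera_range_m=85.0, camera_var=25.0,
                radar_range_m=80.5, radar_var=1.0)
print(f"fused range: {est:.1f} m (variance {var:.2f})")  # ~80.7 m, radar-weighted
```

The point: where the camera is confident, it wins; where it isn't (night, fog, long range), the radar quietly takes over. That's the "plug the gaps" argument in one formula.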
The practical problem is that Rivian has a very limited fleet. Roughly 100K R1s can't generate enough data, regardless of sensor arrays, versus 6M+ Teslas, so the data-ingestion gap actually widens every single day. Maybe embedding Rivian's sensor arrays in next-gen VW EVs helps close it, but you're still looking at a decade before there are enough Rivian-inside vehicles feeding data back to make a meaningful dent.
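Back-of-envelope, using the fleet sizes above and a made-up average of 30 miles per vehicle per day:

```python
# Back-of-envelope on the fleet data gap. Fleet sizes are from the post;
# miles/day per vehicle is an assumed round number, not real telemetry.

RIVIAN_FLEET = 100_000
TESLA_FLEET = 6_000_000
MILES_PER_VEHICLE_PER_DAY = 30  # assumption

rivian_daily = RIVIAN_FLEET * MILES_PER_VEHICLE_PER_DAY  # 3,000,000
tesla_daily = TESLA_FLEET * MILES_PER_VEHICLE_PER_DAY    # 180,000,000
print(f"Rivian: {rivian_daily:,} fleet-miles/day")
print(f"Tesla:  {tesla_daily:,} fleet-miles/day")
print(f"Gap grows by {tesla_daily - rivian_daily:,} fleet-miles per day")
```

That's roughly 177 million fleet-miles of extra exposure per day under these assumptions, which is why fleet size can dominate sensor count in the data race.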
However, all is not lost. A clever set of engineers can use simulations built from limited real-world data to accelerate learning. This is where Tesla's investments in computing infrastructure for modeling may push them way ahead - or may not, given their vision-first approach. If you've read the recent FSD release notes, Tesla is starting to incorporate a second sensory input - sound - initially to help identify emergency vehicles with their sirens on, but they could complement vision with sound (hello, Doppler effect) to create a kind of faux-radar. Frankly, it would be easier with regular radar, but what do I know.
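For the Doppler aside: if you know a siren's base pitch and measure how much higher it sounds, you can back out the closing speed. A toy sketch, assuming a stationary listener and a single known source tone (real sirens sweep, so a real system would track the sweep envelope, and both vehicles are usually moving):

```python
# Rough Doppler "faux-radar": infer a siren's closing speed from its pitch shift.
# Assumes a stationary listener and a known 700 Hz source tone - both are
# simplifications made up for this sketch.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def closing_speed(f_source_hz: float, f_observed_hz: float) -> float:
    """Source approaching a stationary observer: f_obs = f_src * c / (c - v)."""
    return SPEED_OF_SOUND * (1.0 - f_source_hz / f_observed_hz)

v = closing_speed(f_source_hz=700.0, f_observed_hz=740.0)
print(f"closing speed ~ {v:.1f} m/s ({v * 3.6:.0f} km/h)")  # ~18.5 m/s, ~67 km/h
```

A 40 Hz upward shift on a 700 Hz tone already implies highway-relevant closing speed, so the physics is there; the hard part is isolating the tone in real-world noise.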
Back to Rivian: jam in all the sensors, sure. But that alone might not be enough. I guess we'll just have to wait and see what the future holds for both companies.