Human values vs artificial intelligence: The dilemma in AVs

ETAuto | May 31, 2024
Autonomous vehicles (AVs) that integrate AI promise benefits such as sustainable transport and improved road safety, yet they face public distrust. AV automation ranges from level 0 to level 5, and today's systems still require human oversight. Value alignment is crucial for ensuring that AVs act in accordance with human goals.
Although the dream of fully self-driving cars still belongs to the future, autonomous vehicles (AVs) are already a part of our world. Like other forms of AI, integrating this technology into daily life requires weighing its pros and cons.
One of the main benefits of AVs is their potential to foster sustainable transport. They can reduce traffic congestion and decrease the reliance on fossil fuels. Additionally, AVs can enhance road safety and offer accessible transportation to communities that lack access, including those without a driver’s license.
However, despite these advantages, many people remain wary of fully automated AVs.
An Australian study conducted by Sjaan Koppel from Monash University revealed that 42% of participants would “never” use an automated vehicle to transport their unaccompanied children. In contrast, a mere 7% indicated they would “definitely” use one.
The distrust in AI appears to stem from a fear that machines might make errors or decisions that do not align with human values. This concern is reminiscent of the 1983 film adaptation of Stephen King’s horror novel “Christine,” in which a car becomes murderous. People worry about being increasingly excluded from the decision-making loop of machines.
Automation in vehicles is categorized into levels, with level 0 representing ‘no automation’ and level 5 indicating ‘full driving automation’ where humans are mere passengers.
Currently, consumers have access to levels 0 to 2, while level 3, which provides ‘conditional automation,’ is available in a limited capacity. The second-highest level, level 4 or ‘high automation,’ is being tested. Today’s AVs require drivers to oversee and intervene when the automation isn’t adequate.
To prevent AVs from becoming uncontrollable, AI programmers utilize a method known as value alignment. This approach becomes particularly crucial as vehicles with higher levels of autonomy are developed and tested.
Value alignment involves programming the AI to act in ways that align with human goals, which can be done explicitly for knowledge-based systems or implicitly through learning within neural networks.
For AVs, value alignment would vary depending on the vehicle’s purpose and location. It would likely consider cultural values and adhere to local laws and regulations, such as stopping for an ambulance.
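To make the idea of explicit value alignment concrete, here is a minimal, purely illustrative sketch of how a rule layer might encode local laws (such as yielding to an ambulance) on top of a planner's proposed action. All function and field names here are hypothetical; real AV stacks are far more complex.

```python
# Toy sketch of explicit (rule-based) value alignment: a thin layer of
# hand-written rules that can override whatever action the planner proposes.
# All names (apply_value_rules, percept keys) are illustrative, not a real API.

def apply_value_rules(proposed_action: str, percepts: dict) -> str:
    """Return the planner's action unless an explicit rule overrides it."""
    # Rule: emergency vehicles take priority over the current plan.
    if percepts.get("ambulance_approaching"):
        return "pull_over_and_stop"
    # Rule: never accelerate at or above the posted local speed limit.
    if proposed_action == "accelerate" and \
            percepts.get("speed", 0) >= percepts.get("speed_limit", 50):
        return "hold_speed"
    return proposed_action

# With an ambulance approaching, the rule layer overrides the plan.
print(apply_value_rules("continue", {"ambulance_approaching": True}))
# → pull_over_and_stop
```

Implicit alignment, by contrast, would not spell out such rules; a neural network would be expected to learn the same behaviour from data, which is harder to inspect.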
The ‘trolley problem’ poses a significant challenge for AV alignment.
First introduced by philosopher Philippa Foot in 1967, the trolley problem explores human morals and ethics. When applied to AVs, it can help us understand the complexities of aligning AI with human values.
Imagine an automated vehicle heading towards a crash. It can swerve right to avoid hitting five people but endanger one person instead, or swerve left to avoid the single person but put the five at risk.
What should the AV do? Which choice best reflects human values?
Now, consider a scenario where the AV is a level 1 or 2 vehicle, allowing the driver to take control. When the AV issues a warning, which direction would you steer?
Would your decision change if the choice were between five adults and one child?
What if the one person was a close family member, such as your mom or dad?
These questions highlight that the trolley problem was never intended to have a definitive answer.
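The point can be made concrete with a toy cost comparison (not a real AV policy): the "right" swerve depends entirely on how harms are weighted, and changing the weights reverses the answer. Both the function and the weights below are hypothetical.

```python
# Toy illustration of the trolley problem for AVs: the chosen direction
# flips as soon as the weighting of harms changes, which is exactly why
# the dilemma has no definitive answer. Not a real decision policy.

def choose_swerve(left_harm: float, right_harm: float) -> str:
    """Swerve toward whichever side carries the lower weighted harm."""
    return "left" if left_harm < right_harm else "right"

# Weighting all lives equally: five people on the left outweigh one on the right.
print(choose_swerve(left_harm=5.0, right_harm=1.0))        # → right

# Weight the single person ten times more heavily (a child, a family member)
# and the same function reverses its choice.
print(choose_swerve(left_harm=5.0, right_harm=1.0 * 10))   # → left
```

The code never decides whose weighting is correct; that choice remains a human value judgment.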
The dilemma shows that aligning AVs with human values is an intricate task.
Consider Google’s mishap with its AI model Gemini. An attempt to reduce racial and gender stereotypes in its image generation produced misinformation and absurd results, such as Nazi-era soldiers depicted as people of color. Achieving alignment is complex, and deciding whose values to reflect is equally challenging.
Despite these complications, the attempt to ensure AVs align with human values holds promise.
Aligned AVs could make driving safer. Human drivers often overestimate their driving skills. Most car accidents are the result of human errors like speeding, distraction, or fatigue.
Can AVs help us drive more safely and reliably? Technologies like lane-keeping assist and adaptive cruise control in level 1 AVs already aid in safer driving.
As AVs increasingly populate our roads, it becomes important to enhance responsible driving in tandem with this technology.
Our ability to make effective decisions and drive safely, even with AV assistance, is crucial. Research shows that humans often over-rely on automated systems, a phenomenon known as automation bias. We’re inclined to view technology as infallible.
The term ‘Death by GPS’ has gained popularity because of instances where people blindly follow navigation systems even in the face of clear evidence that the technology is incorrect.
A notable example is when tourists in Queensland drove into a bay while trying to reach North Stradbroke Island via their GPS.
The trolley problem illustrates that technology can be as fallible as humans, possibly more so due to its lack of embodied awareness.
The dystopian fear of AI taking over might not be as dramatic as imagined. A more immediate threat to AV safety could be humans’ readiness to relinquish control to AI.
Our uncritical use of AI affects our cognitive functions, including our sense of direction. This means that our driving skills may degrade as we become more reliant on technology.
While we might see Level 5 AVs in the future, the present depends on human decision-making and our innate skepticism.
Exposure to AV failures can counteract automation bias. Demanding greater transparency in AI decision-making can help AVs augment, rather than replace, human-led road safety.
(Source: PTI)