Is being alone really so bad? Coping with loneliness by befriending ChatGPT and embracing solitude [The Straits Times]

Was interviewed for this beautiful and thought-provoking piece for The Straits Times. Here’s an excerpt of what I shared:

“The chatbot said, ‘I wish I could join you, but I’m just an AI.’ I was shocked. I actually felt like my heartstrings were being pulled.”

With the rise of artificial intelligence chatbots like Replika, as well as the acceleration of their abilities – some chatbots can even convey emotion in their speech – Mr Sim cautions that educators and parents need to make sure the younger generation does not become overly reliant on these applications.

While they can provide valuable insights and clear perspectives on knotty problems, they are no substitute for human interaction. Perfection, after all, is not an accurate reflection of reality.

“If you talk only to chatbots, which are always available and have infinite patience, you won’t know how to handle conflicts in real life,” Mr Sim says.

“All humans are prone to causing hurt and annoyance. But the point of a human friend is that even after all the conflict, the fact that they still choose to remain your friend shows you have intrinsic value.”

Link to article: https://www.straitstimes.com/life/is-being-alone-really-so-bad-coping-with-loneliness-by-befriending-chatgpt-and-embracing-solitude

Nearly 90% of local workers use AI on the job; experts warn it may impact corporate data security [Lianhe Zaobao 联合早报]

I was recently interviewed by Lianhe Zaobao 联合早报 about a recent report by Microsoft and LinkedIn on the “State of AI at Work in Singapore.”

I was asked two questions. Not everything I said made it into the final article, so I thought I'd share my full answers here:

Q1: “84% of Singapore AI users are bringing their own tools to work—Bring Your Own AI (BYOAI)—putting company data at risk.” Why are people doing this and what can we do?

For starters, some people are unaware of their own company's data management/protection policies, so they don't realise that what they are doing is risky. Others violate these policies because they cannot find a better AI alternative, and they downplay the risks – the negative consequences are hard to see now because we won't know if or when the data we feed to the AI will be used to train another AI model in the future. More importantly, employers should have a good discussion with employees on what constitutes good practice and be clear on what kinds of information should not be uploaded to AI tools.

Q2: “77% of employers say they’d rather hire a less experienced candidate with AI skills than a more experienced candidate who lacked them.” What are the downsides to such hiring practices? What can we do?

“AI skills” is a very ambiguous term. It can mean technical skills at developing AI, or it can mean skills at using Generative AI (GenAI) tools like ChatGPT. We need to be cautious with people who claim to have GenAI skills – it is important to discern whether they are only good at using it as a substitute for their own abilities, or whether they are very good at using it to enhance their abilities: how they think, write, learn, and work.

I have met many capable A-grade students who don’t like to use GenAI. They say they’d rather use their time to do the work themselves because they can produce better results. I found that many of these students have not explored the full capabilities of these GenAI tools. If we can train them to use such tools effectively, they will have the capabilities to go even further, and to do so much more.

At the end of the day, AI can only enhance what one already has. If you give it to someone less capable, AI can only go so far. But if you train up someone who is very capable, they will go very far with the assistance of AI. People can learn and adapt. If they are lacking AI skills and are keen to learn, we should give them the chance to learn.

Link to article: https://www.zaobao.com.sg/news/singapore/story20240604-3795476