Ben-onie P. Barroga | Jan 28, 2025
Updated: Apr 29, 2025
Understanding ourselves as something and someone different from something and someone else is what makes us take responsibility for something and someone other than ourselves. As humans, it is never in our self-interest to serve only our self-interest. As the phenomenon of suicide proves, survival is never enough.
As individuals and as humanity, we need a greater purpose than avoiding extinction. And we need each other to remind us of that. That's why this article is titled "AI Is Not an Existential Threat, but Humans Using AI Are": because the pursuit of technological maturity risks leading to existential immaturity, and not only in the people who develop the technology, but also in those who use it.
The greatest threat to humans is not technology, but humans who have forgotten that there is more to being human than what can be imitated and replaced by technology. Just as it is not for existential philosophers to say whether advanced technology will be able to extend our lives indefinitely, it should not be for math experts and tech founders to tell us how to understand and live our lives. Deciding what is worth living and striving for is a job for each and every one of us. One that AI can never replace.
Artificial intelligence is no longer a future disruptor – it’s already transforming healthcare delivery. From AI-assisted diagnostics to predictive analytics and virtual surgical planning, healthcare AI is reshaping how care is delivered, how clinical resources are managed, and how patients engage with the system.
Ben-onie P. Barroga | Jan 28, 2025
Updated: Apr 30, 2025
Generative Adversarial Networks (GANs), first introduced by Ian Goodfellow in 2014, are a powerful class of neural networks used for unsupervised learning. A GAN can learn to generate new data resembling whatever you feed it, following a cycle of learn, generate, improve.

To understand GANs, it helps to have a basic understanding of Convolutional Neural Networks (CNNs). A CNN is trained to classify images against their labels: when an image is fed to a CNN, it is analyzed through the network's hidden layers, and the output tells you what the image contains. For example, a CNN trained to classify dogs and cats can tell whether a given image shows a dog or a cat. In that sense, a CNN is a classification algorithm.

How are GANs different? A GAN consists of two parts: the Generator and the Discriminator.

Discriminator – This part of a GAN works much like a CNN. The Discriminator is a convolutional neural network with several hidden layers and one output layer. The key difference is that its output layer produces only two possible outcomes, whereas a CNN can have as many outputs as the labels it was trained on. Thanks to the activation function chosen for this task, the Discriminator's output is either 1, meaning the provided data is real, or 0, meaning the data is fake. The Discriminator is trained on real data, so it learns what genuine data looks like and which features data must have to be classified as real.
Generator – As the name suggests, this is the generative part. The Generator is roughly the inverse of a CNN: instead of taking an actual image as input and producing a class label as output, it takes random noise (a vector of random values, to be precise) as input and produces an image as output, typically using transposed convolutions. In simple terms, it generates new data from noise, using what it has learned about the real data.
A random vector is given as input to this transposed-convolution network, and after passing through the hidden layers and activation functions, an image is received as the output.

How the Generator and Discriminator work together: As discussed, the Discriminator is trained on real data, so its job is to tell what's real and what's fake. The Generator starts by generating data from a random input, and that generated data is passed to the Discriminator, which analyzes it and judges how close it is to being classified as real. If the generated data does not contain enough of the right features, the Discriminator classifies it as fake, and the resulting error signal is sent back to the Generator through backpropagation so that it can readjust its weights and create new data that is better than the previous attempt. The freshly generated data is passed to the Discriminator again, and the cycle continues. This process repeats for as long as the Discriminator keeps classifying the generated data as fake; with every round of backpropagation the quality of the generated data improves, until the Generator becomes so good that it is hard to distinguish real data from data it has produced.

In simple terms, the Discriminator is a trained judge that can tell what's real and what's fake, while the Generator tries to fool it into believing the generated data is real. With each unsuccessful attempt, the Generator learns and improves, producing data that looks more and more realistic. You can think of it as a competition between the Generator and the Discriminator.
Sample code for generator and discriminator:
1. Building the generator:
a. What to pass to the first layer of the generator: random_normal_dimensions, a hyperparameter that defines how many random numbers the input noise vector contains, i.e. the starting point for generating images.
b. Note that we use the "selu" activation function instead of "relu". ReLU tends to discard information by zeroing out negative values, which helps when classifying data, but in a GAN's generator we don't want to throw that information away.
from tensorflow import keras

# You'll pass random_normal_dimensions to the first dense layer of the generator
random_normal_dimensions = 32

generator = keras.models.Sequential([
    # Project the noise vector to 7*7*128 units, then reshape it into a 7x7 feature map
    keras.layers.Dense(7 * 7 * 128, input_shape=[random_normal_dimensions]),
    keras.layers.Reshape([7, 7, 128]),
    keras.layers.BatchNormalization(),
    # Upsample 7x7 -> 14x14
    keras.layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="same",
                                 activation="selu"),
    keras.layers.BatchNormalization(),
    # Upsample 14x14 -> 28x28 with one channel; tanh keeps pixel values in [-1, 1]
    keras.layers.Conv2DTranspose(1, kernel_size=5, strides=2, padding="same",
                                 activation="tanh")
])
2. Building the discriminator:
discriminator = keras.models.Sequential([
    # Downsample 28x28 -> 14x14
    keras.layers.Conv2D(64, kernel_size=5, strides=2, padding="same",
                        activation=keras.layers.LeakyReLU(0.2),
                        input_shape=[28, 28, 1]),
    keras.layers.Dropout(0.4),
    # Downsample 14x14 -> 7x7
    keras.layers.Conv2D(128, kernel_size=5, strides=2, padding="same",
                        activation=keras.layers.LeakyReLU(0.2)),
    keras.layers.Dropout(0.4),
    keras.layers.Flatten(),
    # Single sigmoid unit: output close to 1 means real, close to 0 means fake
    keras.layers.Dense(1, activation="sigmoid")
])
3. Compiling the discriminator:
Here we are compiling the discriminator with a binary_crossentropy loss and rmsprop optimizer.
Set the discriminator to not train on its weights (set its “trainable” field).
discriminator.compile(loss="binary_crossentropy", optimizer="rmsprop")

# Freeze the discriminator so its weights are not updated while the combined GAN is trained
discriminator.trainable = False
4. Build and compile the GAN model:
Build the sequential model for the GAN, passing a list containing the generator and discriminator.
Compile the model with a binary cross entropy loss and rmsprop optimizer.
# Chain the generator and discriminator: noise -> fake image -> real/fake score
gan = keras.models.Sequential([generator, discriminator])
gan.compile(loss="binary_crossentropy", optimizer="rmsprop")
5. Train the GAN:
Phase 1 – train the discriminator:
real_batch_size: get the batch size of the input batch (the zero-th dimension of the tensor).
noise: generate the noise using tf.random.normal with shape [real_batch_size, random_normal_dimensions].
fake_images: pass the noise through the generator you just created to produce fake images.
mixed_images: concatenate the fake images with the real images, setting the axis to 0.
discriminator_labels: set to 0. for the fake images and 1. for the real images.
Set the discriminator to be trainable.
Use the discriminator's train_on_batch() method to train on the mixed images and the discriminator labels.
Phase 2 – train the generator through the combined GAN:
noise: generate random normal values with shape [real_batch_size, random_normal_dimensions].
generator_labels: set to 1. to mark the fake images as real.
The generator will generate fake images that are labeled as real and attempt to fool the discriminator.
Set the discriminator to NOT be trainable.
Train the GAN on the noise and the generator labels using train_on_batch().
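Putting the two phases together, here is a minimal training-loop sketch that follows the steps above. It assumes dataset is a tf.data.Dataset yielding batches of 28x28x1 images scaled to [-1, 1] to match the generator's tanh output; the function name train_gan and the n_epochs parameter are illustrative choices rather than part of the original recipe.

import tensorflow as tf

def train_gan(gan, dataset, random_normal_dimensions, n_epochs=50):
    # The Sequential GAN holds the generator and discriminator as its two layers
    generator, discriminator = gan.layers
    for epoch in range(n_epochs):
        for real_images in dataset:
            # Phase 1 - train the discriminator on a mix of fake and real images
            real_batch_size = real_images.shape[0]
            noise = tf.random.normal(shape=[real_batch_size, random_normal_dimensions])
            fake_images = generator(noise)
            mixed_images = tf.concat([fake_images, real_images], axis=0)
            # 0. for the fake images (listed first), 1. for the real images
            discriminator_labels = tf.constant([[0.]] * real_batch_size +
                                               [[1.]] * real_batch_size)
            discriminator.trainable = True
            discriminator.train_on_batch(mixed_images, discriminator_labels)

            # Phase 2 - train the generator through the frozen discriminator
            noise = tf.random.normal(shape=[real_batch_size, random_normal_dimensions])
            generator_labels = tf.constant([[1.]] * real_batch_size)  # mark fakes as real
            discriminator.trainable = False
            gan.train_on_batch(noise, generator_labels)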
Ben-onie P. Barroga | Jan 25, 2025
Updated: Apr 29, 2025
As our research shows, with organizations gaining wider access to conversational gen AI models, many managers now understand that the technology can go far beyond simple productivity tasks. Two-thirds of managers now believe that gen AI has the potential to be their thought partner, providing fresh perspectives, weighing pros and cons, evaluating trade-offs, enhancing strategic thinking, and supporting leadership development. But while expectations for the future are high, there is a long way to go. Today only 30% of managers report that they have the skills and knowledge to use gen AI in this way.
Fortunately, managers can bridge the gap quickly. Based on our experience, countless conversations, and thousands of prompt iterations, we have created an HBR Guide with instructions and tools to help managers use any conversational gen AI tool as a thought partner.
With these techniques, humans and AI working together can achieve high-quality outcomes that would not be possible working separately. We call this process “co-thinking,” and this article will show you how to get started.
The Power of Co-Thinking
Gen AI can be used as a thought partner for anything from speech preparation to root-cause analysis to strategy formulation. Let’s look at a few examples:
·Ben is a project manager at a manufacturing plant, and he and his team often must solve technical challenges. Uncovering the root causes of problems is a recurring yet complex task. He has designed a structured dialogue with gen AI to help him think through each problem; whenever a new one arises, he fine-tunes and tests his prompt sequence.
·Kim is a communications manager at a large consumer goods company. She has been using gen AI both to draft press releases and to weigh how different stakeholders might react to them. Kim engages in a two-way conversation asking gen AI to act as specific stakeholders and debate with her about the content of each draft. Kim’s rule of thumb is to challenge and be challenged by AI during the dialogue.
·Amy is a finance leader at a large technology company who is tasked with promoting a change initiative that will transform her department from an internal service provider into a strategic partner for business units. Amy has created a gen AI prompt sequence that her team members can run to help them understand and reflect on how to embrace the new mindset and behaviors that this initiative will require.
These stories shed light on a way of interacting with AI that goes far beyond simply answering questions or pushing a button and getting an output. It involves an active back-and-forth conversation, examples of which appear in the table below, where both the human and the AI contribute ideas and build on each other's thoughts at every step of the dialogue.
Actions for Managers and AI in Co-Thinking Dialogues
Humans and AI contribute ideas and build upon each other’s thoughts in different ways.
What the manager can do in the dialogue or meeting
● Provide context
● Give input
● Define criteria
● Offer feedback
● Make comments
● Add/drop options or ideas
● Select
● Check
● Validate
● Decide
What the AI can do in the dialogue or meeting
● Articulate
● Exemplify
● Give options
● Wear different hats
● Propose
● Elaborate
● Suggest
● Analyze pros and cons
● Give different perspectives
● Challenge opinions
The value of a co-thinking dialogue lies not only in the output but also in the dialogue itself; in fact, value is co-created through the various steps of the human–AI conversation.
Executives in many roles have spoken about using gen AI as a co-thinker. Masayoshi Son of Softbank has said, “I am chatting with ChatGPT every day,” brainstorming new ideas and business strategies. Jeff Maggioncalda of Coursera called gen AI “an incredible thought partner changing the way I do my job.”
Outside the C-suite, we met managers at all levels who embrace co-thinking. Valentin Marguet, a project manager at Ferrari, shared that “For technical problem-solving, generative AI can act as a co-thinking partner, serving as a methodological expert while guiding structured thought processes to systematically explore root causes.” We believe that these approaches can help any leader or manager and that the time to start is now.
How to Design a Co-Thinking Dialogue with AI
We have created a four-step framework to help managers successfully design a co-thinking dialogue with gen AI.
Step 1: Assign a role to AI. Define the specific role you want AI to take, such as acting like an expert, a team member, or a devil’s advocate. Use introductory phrases like “Act as…”, “Help me…”, “The output of the dialogue will be…”, etc.
Step 2: Define the setting. Choose the setting in which you'd like the conversation to take place. Who is taking part in the conversation? Should it be a one-to-one interaction between you and the AI, or a one-to-many interaction (for example, at a team meeting or workshop)?
Step 3: Outline the dialogue. Envision a sequence of questions and statements that will clarify who does what between you and the AI. Define closely what the AI will bring and what the human will bring; leaving the flow of the dialogue to chance increases the risk that it will go off track.
Step 4: Create the prompt. Write the prompt based on the outline you drafted. You can use gen AI to help you refine the prompt further; for instance, you can type in the chat: "I have this outline of a dialogue and need your help turning it into a structured prompt that you can run with me: '[insert your outline here].'"
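If you prefer to run such a dialogue through an API rather than a chat window, the four steps above can be folded into a single system prompt. Below is a minimal Python sketch assuming the OpenAI Python SDK is installed and an API key is configured; the model name, the role text, and the dialogue outline are illustrative placeholders, not a prescribed setup.

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Steps 1-3: assign a role, define the setting, and outline the dialogue
co_thinking_prompt = (
    "Act as a devil's advocate and methodological expert. "           # Step 1: role
    "This is a one-to-one dialogue between you and me, a manager. "   # Step 2: setting
    "Guide me through these steps, always asking for my feedback "    # Step 3: outline
    "before moving on: "
    "(1) ask me to describe the decision I am facing and its context; "
    "(2) propose three options and analyze their pros and cons; "
    "(3) challenge my preferred option from two stakeholder perspectives; "
    "(4) summarize the dialogue and the criteria behind my final choice."
)

# Step 4: turn the outline into a prompt and start the back-and-forth conversation
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use any conversational model you have access to
    messages=[
        {"role": "system", "content": co_thinking_prompt},
        {"role": "user", "content": "I'm ready to start."},
    ],
)
print(response.choices[0].message.content)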
The sidebar “Two Co-Thinking Dialogues to Try” shows you two prompts that were created using this framework. We encourage you to copy and paste them into any gen AI model of your choosing and start your dialogue now. After you have proceeded through the dialogue once, try modifying the prompt to address a professional challenge you are facing — or design a new dialogue from scratch. The more you experiment with co-thinking dialogues, the faster your skills will improve.
Two Co-Thinking Dialogues to Try
Below are two prompts to help you try out co-thinking with AI. Copy and paste either of the dialogues into a gen AI model of your choice.
...
You (Gen AI) will act as an expert of systems thinking in collaborative innovation, capable of taking the perspective of multiple stakeholders.
I want you (Gen AI) to guide me through the following steps. Always ask for my feedback before the next step.
[Step 1] Gen AI asks the manager to share the problem and the list of external stakeholders that need to be involved to solve it, explaining why the company alone cannot tackle it.
[Step 2] Gen AI suggests three additional stakeholders that may have been overlooked. The manager provides feedback and validates the revised list of stakeholders.
[Step 3] Gen AI creates a table with four columns: stakeholders, their specific needs, unresolved pain points, and associated root causes. The manager provides feedback and validates the table.
[Step 4] Gen AI asks the manager to select the three most critical stakeholders. Then, it suggests three red flags and mitigation actions for each stakeholder. The manager provides feedback.
[Step 5] Gen AI suggests three immediate next steps to initiate engagement with each stakeholder.
You (Gen AI) will act as an expert in mindset change, providing methodological coaching on what is needed and how it can be practiced concretely.
I want you (Gen AI) to guide me through the following steps. Always ask for my feedback before the next step.
[Step 1] Gen AI asks the manager to share contextual information about the organization, the new mindset to adopt, and the reason for the shift. Then it elaborates with examples or situational scenarios to clarify what the new mindset means in practice and how to embody it.
[Step 2] Gen AI asks the manager three questions to help evaluate the current maturity of the individual’s mindset shift. The manager reflects on the results of this self-evaluation and prioritizes the areas of opportunity to reduce the gap versus the desired mindset.
[Step 3] Based on the selected areas of opportunity, gen AI asks the manager to share an example from daily work.
[Step 4] Gen AI suggests three good practices, tips, and routines that the manager can apply. The manager comments and selects one.
[Step 5] Gen AI recommends rejoining the dialogue after a few weeks to discuss any challenges that have arisen in implementing the selected practice.
Getting the Most out of Your Gen AI Dialogues
Our research brought us into contact with Jessica Tompkinson, a former global head of communications and corporate affairs at Unilever Operations. She shared some of the ups and downs of learning to use gen AI as a co-thinker: "Looking back at my first prompts, what was coming back was useless for what I needed. I had to change my approach and started to converse with gen AI like I would a colleague or an agency. The back-and-forth conversation significantly improved the outcome and allowed the tools to learn more in the process."
As you begin to use gen AI as a co-thinker, expect mixed results at first. Following the best practices in this list of do’s and don’ts can help accelerate your learning curve.
Dos
·Embrace a conversational mindset: Approach gen AI as if you’re talking to a human collaborator. The machine can operate similarly to the Socratic method, enhancing your critical thinking through dialogue. You can prompt gen AI to drive you through a structured dialogue about a topic or issue you choose, elaborating on your answers, adding overlooked dimensions, and leading to follow-up questions.
·Participate actively: You must engage with the machine in a back-and-forth interaction. Both parties contribute with feedback and mutual challenges, and ultimately co-create the output together. To receive appropriate advice, tailored answers, and helpful suggestions, start by providing a brief and accurate overview of your situation with clear context. Then, build on it by engaging actively — share your reflections and personal experiences, and ask follow-up questions.
·Challenge the AI: As in a dialogue with a human, some friction in the thinking process is valuable. Ask the AI to provide different perspectives, ideas, or overlooked options. Don’t stop at the first generated output or conform too quickly to what the AI gives you. For example, ask “Are we missing any important aspects?” Or, “What if we looked at this problem from the point of view of …?”
Don’ts
·Go it alone: When people work predominantly alone with generative AI, rather than actively involving and engaging with human teammates, it can reduce interpersonal communication and knowledge sharing within the team. This can hinder team-based judgment and the ability to prevent and mitigate risks. Take breaks from solo AI interactions to engage face-to-face with teammates, involve other colleagues in the AI-aided process, seek feedback from experts, integrate diverse viewpoints, and encourage peer learning.
·Go too fast: You may tend to type, click, or advance too hastily. The speed at which gen AI executes can lead humans to rush through without reflection. Request AI to help you pause and reflect: Explicitly prompt AI to ask for your feedback, ensuring you have time to reflect and validate before moving forward.
Soon, managers who develop their gen AI capabilities will leap ahead of those who don't. It's time to get your hands dirty with gen AI and understand how to use it for yourself, your team, and the business. Identify tasks where you can experiment with co-thinking. Prioritize them, keep track of the benefits, risks, and lessons learned, and share what you find with your team. It's your job to act as a co-thinking role model, encouraging your team to discuss what is going well and what could be improved, and collecting learnings in a structured way as the technology evolves.
Ben-onie P. Barroga | Jan 24, 2025
Updated: Apr 29, 2025
Recently, I realized that AI and its impact are among the things that come to my mind at least once a day. With the advancement of technology, artificial intelligence is gradually becoming a norm in our lives. Whether at work, at leisure, or in daily routines, we experience the convenience and transformation brought by AI. However, for many, AI remains an abstract concept whose workings are unclear.
Artificial intelligence is a technology that emulates human cognitive abilities and thinking processes. It encompasses various areas like machine learning, deep learning, natural language processing, and computer vision to simulate human intelligence, enabling computers to perform complex tasks like image recognition, speech analysis, decision-making, and natural language understanding.
AI's applications span across diverse fields such as medical imaging analysis, financial risk assessment, manufacturing automation, and intelligent systems management. From healthcare to finance, manufacturing, and beyond, AI continues to expand its role, promising a future with even broader applications.
I read a comprehensive paper titled "Consciousness in Artificial Intelligence: Insights from Consciousness Science" by Turing Award laureate Yoshua Bengio and collaborators from philosophy and neuroscience, and I found it really thought-provoking. It delves into the debate over consciousness in AI systems, exploring mainstream theories of consciousness and the possibility of constructing conscious AI systems.
The question of whether AI possesses consciousness is increasingly pressing. As AI rapidly advances, top researchers draw inspiration from human brain functions to enhance AI capabilities. Yet, the rise of AI systems convincingly emulating human conversation might lead many to believe these systems possess consciousness. The paper also draws insights from neuroscience's consciousness theories, and explores their implications for AI.
The debate on whether AI will develop self-awareness remains contentious and uncertain. Some posit that AI systems may develop self-awareness as their computational capacity and machine learning algorithms evolve.
As AI systems grow in complexity, potentially reaching human brain-scale neural networks and computational power, consciousness might emerge as a natural outcome of complex information processing. From this perspective, AI developing self-awareness is plausible with technological advancements. However, others argue that AI, no matter how sophisticated the algorithms, operates based on predefined programs and lacks biological characteristics and perceptual experiences, making the emergence of human-like self-awareness highly improbable.
There's a risk of over-attributing consciousness to AI systems: a tendency to anthropomorphize non-human systems and ascribe human-like psychological states to them. People naturally attribute subjectivity, intent, and emotions to non-human entities, a phenomenon termed "anthropomorphic bias."
Anthropomorphization likely occurs because it helps people understand and predict the behavior of complex systems like AI within familiar human cognitive frameworks, even though it can lead to incorrect interpretations; it also helps individuals navigate their interactions with AI. Researchers have identified factors that encourage anthropomorphism, including the appearance, behavior, and perceived autonomy of AI systems.
Attributing autonomy and consciousness to AI might also stem from emotional needs for social interaction. Those seeking social engagement and fulfillment from artificial systems may find it easier to attribute consciousness to them. Language models can now convincingly mimic human discourse, making it hard not to perceive interactions with them as conversations with conscious entities, especially when the models are prompted to play a human role.
I think that as we navigate this landscape of innovation and advancement, the nuanced understanding of AI's capabilities, its potential for self-awareness, and the inherent risks of over-attribution serve as guiding beacons.
The evolving dialogue between neuroscience, philosophy, and AI holds promise not only in unraveling the mysteries of consciousness but also in steering us toward responsible and meaningful interactions with the intelligent systems we create. As AI continues to redefine the boundaries of human ingenuity, our reflections on its nature not only shape our technology but also reflect our perceptions of what it means to be sentient.
In contemplating the future of AI, the prospect of acknowledging anything akin to a soul becomes increasingly challenging. I think it would be undeniably fascinating if, in the future, we could replicate human intelligence. However, I also think this accomplishment would introduce a certain disappointment, as it complicates the fundamental question of what it means to be human and makes it even harder to answer than it already is.
"Artificial intelligence, deep learning, machine learning — whatever you’re doing if you don’t understand it — learn it. Because otherwise, you’re going to be a dinosaur within three years.".
Computer scientist and Turing Award laureate, 2021
Computer scientist and Co-director of the Stanford Institute for Human-Centered
American entrepreneur and investor, 2020
CEO at Microsoft, 2023
CEO at NVIDIA, 2021
Ben-onie P. Barroga | Jan 28, 2025
Updated: Apr 29, 2025
Don’t wait, start right now. This is by far the most frequently repeated piece of blogging advice that nearly every top blogger around the world shares today (for good reason).
And it makes perfect sense. It’s easy to give yourself the excuse of waiting for the perfect moment to start your blog, or to keep pushing off that launch date until you can learn everything you need to know about blogging. “No sense in starting until I can do things just right…”
But here’s the truth—you’ll never feel completely ready to start.
It’s only through taking action and starting right now (no matter how small your early steps are) that you’ll ever see just how much your work can pay off as it compounds over days, weeks, months, and years.
In order to achieve your ultimate goal of building a profitable blog, you must heed these foundational blogging tips. So if you haven’t already launched—get your blog started today. There will be unknowns to learn more about, but that’s ok.
Figure out what kind of content you can offer to an audience you’re personally connected with, get started and keep taking consistent steps that move your blog forward each day.