P.I.N.G. 18.1


The Editor’s Desk

P.I.C.T. IEEE Newsletter Group (P.I.N.G.) is the annual technical magazine of the PICT

IEEE Student Branch (PISB), published alongside Credenz. It is a magazine of international

repute that has gained wide recognition since its inception in 2004.

With its engaging articles on current technology, P.I.N.G. aims to instill a sense

of technical awareness in its readers. It takes pleasure in having readership

from numerous colleges and professional institutes globally, thereby educating

fellow students, professors, and professionals on the most recent innovations.

This year we had the opportunity to interact with Ms. Manasi Joshi, Director of Engineering AI/

ML at Apple, and Mr. Uday Ghare, Vice President of Telecom and Media and Entertainment Business,

and Co-Founder of Maker’s Lab and M&E Practice. We would also like to thank Mr. Mukul

Kumar, Co-founder of PubMatic, for inviting us to their Pune office and interacting with us.

We would like to express our gratitude to all authors for their contributions to

P.I.N.G. 18.1. We would also like to thank our seniors for their continuous support

and guidance in making this issue a grand success. A special mention goes to

our junior team, as this issue is a testament to their persistent and diligent efforts.

Samir Hendre

Editor

Sangeeta Singh

Editor

Shreyas Chandolkar

Designer


Prof. Madhuri Wakode
Branch Counselor

Dear All,

As the Branch Counselor of PICT IEEE Student

Branch (PISB), I am delighted to introduce the latest

issue, 18.1, of our newsletter, P.I.N.G. I appreciate the entire P.I.N.G. team for their whole-hearted

efforts in releasing this issue, which provides

insights into the latest technical advancements

in engineering disciplines. P.I.N.G, a contribution

by PISB, offers everyone, including student members,

an opportunity to share their views and

ideas with the engineering community, further

strengthening professional society activities.

PICT IEEE Student Branch provides a platform

for students to showcase their talents

and gain insightful knowledge of the technical

world through various activities conducted

by its members. At PISB, we hold events throughout the year, including Credenz, Credenz

Tech Dayz (CTD), a national-level coding contest, NTH, and Algostrike, which are widely appreciated

by students, academicians, and industry professionals. We, at PISB, are committed

to involving students in their technical interests and further strengthening IEEE activities.

PISB received the Outstanding Student Branch 2021 award from the IEEE Pune Section, two of our students received Outstanding Volunteer awards, and student volunteers were also appreciated for their involvement in IEEE Pune Section activities. The branch also received the

Membership Development Award in 2022. These achievements rightly reflect the active

involvement of our student members. I would also like to acknowledge the strong

support from Mr. R. S. Kothavale, Managing Trustee of SCTR; Mr. Swastik Sirsikar, Secretary

of SCTR; Dr. P. T. Kulkarni, Director of PICT; and Dr. S. T. Gandhe, Principal of PICT.

Along with responsibilities, it is an honor to serve as a Branch Counselor for PISB. It is an interesting,

valuable, and great learning experience to work with enthusiastic student members at

PISB. I am grateful to all the members of the PICT IEEE Student Branch for their active support.

Sincerely,

Prof. Madhuri Wakode,

Branch Counselor, PICT IEEE Student Branch

JUNE 2023 ISSUE 18.1

3

CREDENZ.IN


Flashback

Unraveling the techie

Sachin Johnson Chirayath, Ex-Editor, P.I.N.G

With Mr. Vikrant Agarwal

Nostalgia

I was introduced to the whole process of publishing a technical magazine, and it made me realize how much I had grown in a short time, in aspects that students usually grow in only by the end of their college life.

Sachin Johnson Chirayath,

Ex-Editor, P.I.N.G

I simply start smiling from ear to ear when I reminisce about my time at P.I.N.G. An awesome

team of brilliant editors, creative designers,

and energetic juniors gifted me with the most

wonderful experiences and learnings that I gladly

look back on today.

Joining P.I.N.G. as a timid second-year student

after being introduced to IEEE, PISB, and Credenz

was a no-brainer. Apart from the potential

opportunity of getting to interview or even be in

the same room with seasoned professionals in the

technology domain, a genuine curiosity towards

being in the know about the STEM field and its

applications and being comfortable with reading

and writing literature made the decision quite

easy for me.

Two months away from Credenz ‘18, what I thought would just be a usual editing sprint of articles (my general idea of being a junior in an editorial team) turned out to be a rollercoaster ride: brainstorming ideas for a new section, the Editorial piece; contacting a lot of people for articles, the flashback, and the interview; and writing emails.

The release of P.I.N.G. 14.1 was an addictive feeling

as it made us want to go through the grind of leading

a team and publishing it ourselves. This motivated

me and my fellow like-minded editors in the

team to make sure that this platform for students,

people in academia, and professionals created a

bigger impact than it already had. Our seniors instilled in us the mindset to grow the magazine in every way imaginable and to set a good precedent for our successors.

We understood the responsibility that had

been entrusted to us and the rapport we

shared with our juniors made it easier for us

to carry forward the legacy of being the identity

of our student chapter internationally.

The work had begun for Credenz ‘19 and as tradition

follows, we introduced two new sections in

P.I.N.G. 15.1. The Tribute section and the Alumnus section reflected our team’s passion for big contributions to science and the immense respect we had for our seniors, who made the best of their education in the same environment we were in; these sections brought them into the limelight. The

experience of interviewing the top management

of two amazing organizations was a cherry on top

of the work we had put in to see that our magazine

reflected and upheld all the values of P.I.N.G.

for which this team was created by our seniors.

Seeing the colored printed pages, the end product of sleepless nights, creative blocks, tiring bouts of managing different areas of the magazine, and mentoring the juniors, gave me and the Editorial team the kind of satisfaction that almost brought us to tears.

The work ethic followed to produce P.I.N.G. 14.1 &

15.1 has been ingrained in me and looking back

I feel it was a simulation of the professional

world scenarios where you have to deal with




multiple stakeholders, manage people, be open

to learning new tools & skills, be accountable

and stick to the vision. The all-rounded grind that

you go through while publishing a magazine like

this one prepares you for the worst-case scenarios

and helps you deal with uncertainty calmly.

I still make sure not to miss out on the issues published by the team every year, and it makes me glad to see the juniors pour out their creativity, make every issue a unique one, and ensure that it stays true to its purpose of being the arena for like-minded individuals to come together as one team, help each other grow, produce something bigger every year, and continue the legacy.

The biggest takeaway from being a part of P.I.N.G. would be that being open to different possibilities, by making sure that you aren’t siloed from what’s happening outside of your comfort zone, and surrounding yourself with like-minded individuals with the same drive and ambition might put you on one hell of a ride, but it will turn you into an individual with a ton of learnings that will stick with you throughout your life. After all, iron sharpens iron. Always get ready to be P.I.N.G.’d!


Sachin Johnson is currently working as a Software

Engineer and was Editor for P.I.N.G. 14.1 and 15.1.

- The Editorial Board



AI vs. Machine Learning

What’s the Difference?


Philomath

How Is Artificial Intelligence different from

Machine Learning?

In the era of newly emerging tools, like ChatGPT,

Stable Diffusion and GPT-4, artificial intelligence

has moved out of science fiction and

entered the lives of common people. AI has been a

buzzword for a couple of years now, but its inclusion in nearly every task has grown exponentially in the past two years, ranging from protein folding to

voice mimicking. Fascinated by science fiction and

those wonderful generative AI tools, developers

and enthusiasts are showing keen interest in

learning more about artificial intelligence and

machine learning. Being aware of the breadth

of artificial intelligence is necessary to understand

the pros and cons of expert ML systems which are

specifically designed to perform a certain task.

Artificial intelligence is a much broader concept

and associating it with statistical models leads to

the dilution of a more general concept. Limiting our scope to supervised and unsupervised models, we will understand what intelligence means and how it differs from ‘data-based’ predictions. We will start by understanding the need for machine learning models and how they can help us achieve true artificial intelligence, or generalized intelligence.

Why do we need ML models?

Consider a simple problem of house-price

prediction where the goal is to estimate the price of

a house given some parameters, like the number of rooms, location, area, infrastructure, medical facilities, etc. We also have a dataset that contains samples, i.e., the features of each house and its expected price.

Without using any statistical technique, if an

individual is asked to predict the price of a house

(which is not present in the dataset) just by looking

at the dataset, what would the approach be?

Our brain would quickly start finding patterns in the data by observing each feature, like the number of rooms, and checking its influence on the target variable, i.e., the price of the house. After checking the

influence of each feature on the target variable,

we need to prioritize or weigh the features according

to their influence. For instance, the number of

hospitals could have the highest priority but how

do we assert the validity of this assumption? Here

comes the role of statistics, the science of analysing data, where a suitable metric is used to quantify a feature’s say in the target variable. In terms of optimization, we would only choose the features that have the maximum influence on the target variable. From here onwards, it is easy to model the given problem of house-price prediction as an optimization problem.
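The optimization view above can be made concrete with a small sketch: fitting per-feature weights for house prices by solving the least-squares normal equations directly. The dataset and the "true" weights here are invented for illustration; with noise-free synthetic prices, the fit recovers those weights exactly.

```python
# Least-squares sketch on hypothetical data: price = 50*rooms + 2*area.
# Solving the 2x2 normal equations (X^T X) w = X^T y by hand shows how
# a feature's "influence" becomes a learned weight.

# Synthetic dataset: (rooms, area) -> price, with made-up true weights.
data = [((rooms, area), 50.0 * rooms + 2.0 * area)
        for rooms in range(1, 6)
        for area in range(50, 100, 10)]

# Accumulate the entries of X^T X and X^T y.
s11 = sum(x1 * x1 for (x1, _), _ in data)
s12 = sum(x1 * x2 for (x1, x2), _ in data)
s22 = sum(x2 * x2 for (_, x2), _ in data)
t1 = sum(x1 * y for (x1, _), y in data)
t2 = sum(x2 * y for (_, x2), y in data)

# Invert the 2x2 system to get the weight of each feature.
det = s11 * s22 - s12 * s12
w_rooms = (s22 * t1 - s12 * t2) / det
w_area = (s11 * t2 - s12 * t1) / det

print(round(w_rooms, 6), round(w_area, 6))  # recovers 50.0 and 2.0
```

The larger learned weight marks the more influential feature, which is exactly the prioritization step described above, expressed as an optimality condition.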

The optimality of the solution ensures that

the appropriate features are picked. This process

of picking the right features is called

feature extraction. Note that feature extraction is highly dependent on data preprocessing, which is not in the scope of our discussion.

We can form a mathematical model which decides

the price of the house given an underlying

rule and a set of assumptions. Our model

closely resembles the regression model which, in terms of geometry, tries to fit a hyperplane in

the sample space. Decision trees are also smart

feature extractors where the decision rules are

constructed by the means of information entropy.

If the value of a feature changes to a greater extent, then it is more informative and can be a top-level decision in the tree of ‘decisions’. Neural networks generalize both these techniques, with a parameterized feature extractor built up from layers of neurons. The process of extracting features from large datasets gives ML models the power to make smart, data-driven decisions. We need statistical techniques to develop models that can make decisions based on the data given to them. ML models are systems that use statistical concepts to extract features from given data; they do not possess intelligence, as they lack skill acquisition and generalizability.
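The decision-tree idea above, using information entropy to pick the most informative feature for the top-level decision rule, can be sketched in a few lines. The toy dataset and feature names are invented for illustration; the gain computation itself is the standard ID3-style entropy reduction.

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(rows, labels, feature):
    """Entropy reduction obtained by splitting the rows on one feature."""
    splits = {}
    for row, label in zip(rows, labels):
        splits.setdefault(row[feature], []).append(label)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in splits.values())
    return entropy(labels) - remainder

# Hypothetical samples: (near_hospital, many_rooms) -> price class.
rows = [(1, 0), (1, 1), (0, 0), (0, 1)]
labels = ["high", "high", "low", "low"]

# near_hospital separates the classes perfectly; many_rooms tells us
# nothing, so a tree would make near_hospital the top-level decision.
print(information_gain(rows, labels, 0))  # 1.0
print(information_gain(rows, labels, 1))  # 0.0
```

A feature whose values change the label distribution the most yields the highest gain, which is what the article means by a feature being "more informative".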




How do we define intelligence?

There have been different views on how

intelligence should be defined and a universally

accepted definition is still absent. Broadly,

intelligence is defined as the ability to generalize

across various tasks/problems without the

intervention of explicit learning. An intelligent

system is expected to solve unseen problems

with its prior knowledge by acquiring new skills.

There are two divergent notions by which we can understand what intelligence is, to concretize a definition in our minds. The first defines intelligence as a collection of task-specific skills, and the second as an ability to generalize learning across multiple tasks.

Intelligence as a ‘Collection of Task-Specific Procedures’

Imagine all possible tasks that your brain is

performing while reading this text. There is

coordination between the eyes, the information-storage mechanism, and the text-processing part

of the brain that leads to the sequential execution

of all those tasks. At a high-level, say level 0, these

tasks are ‘read’, ‘understand’ and ‘store’. If we provide

a finer description, say level 1, the tasks would be

‘read’, ‘associate text with existing knowledge’,

‘identify author’s style’ and ‘remember only if I’m

liking this text’. Even at this level, the description

of these tasks isn’t specific in terms of computing

and they only hold abstract significance. How about increasing the level and providing ever finer explanations, until we reach a level ‘N’ where these tasks are theoretically computable and have been stripped of their abstractness?

In this manner, we will be able to perform a general

task of ‘reading’ with the technology we have

today. This notion of intelligence, as a complete

breakdown of high-level tasks into the most

fundamental bits of computation, is promising and


assumes a more practical approach of bottom-up

construction. In order to solve the problem of intelligence,

our goal is to encode all human tasks

into machine-understandable procedures that

work in synchrony just as they do in our brain.

Intelligence as ‘Generalized Learning

Ability’

Alan Turing expressed a notion of intelligence in which a machine would acquire skills just as the human brain does as it grows. It emphasizes the idea that the human brain is a blank canvas, which can implicitly train itself to perform any task.

Presumably the child brain is something like

a notebook as one buys it from the stationers.

Rather little mechanism, and lots of blank sheets.

- Alan Turing, 1950

If we wish to achieve generalized intelligence, we

need to design a machine that can learn how to

perform tasks without explicitly telling it how to

do it. Given these views on intelligence, it is easier

to view ML models as a narrow reduction of

artificial intelligence as ML models possess generality

only in one task or they cannot extend

their generality without external supervision.

ML systems and Intelligence

Throughout the history of ML, systems or models

have been developed to solve very specific/

narrow problems. Approaches developed for a

particular problem are studied well and then applied

to other domains of data to achieve generality

across a collection of tasks. For instance, the

Transformer architecture developed for language

tasks, soon found a way to vision tasks, leading to

the development of Vision Transformers. Transformers have also been experimented with on various other types of data, like video, conversations, audio, and graphs. These developments may convince

the reader that a single model or system has

the capability to generalize across different tasks




but it simply refers to multi-modality. Multi-modal systems, like the recent GPT-4, work on multiple kinds of data and make good candidates for true generalized intelligence.

Multimodal systems, though, lack skill-acquisition,

which is the ability to acquire new skills without

external supervision. GPT-4 and other large ML

models can only perform specific tasks, such

as natural language generation, and cannot

apply their knowledge in other tasks. Generative

models like Stable Diffusion, Midjourney and

DALLE cannot produce text written in their

images, as they have been trained to produce images as a whole and not as a composition.

Conclusion:

Machine learning and AI are altogether different concepts. Artificial intelligence is a much broader term, and its use should be limited considering the ideality of the concept. Machine learning is the closest framework that could help humanity achieve AI, but the differences that exist between them today should be understood.

Some of us may agree that achieving task-specific intelligence will take us nearer

to the goal of generalized intelligence, but

this would happen only when the number

of tasks is predefined. Achieving generality

across unknown tasks is the key to generalized intelligence.

- Shubham Panchal

Pune Institute of Computer Technology



In Conversation With

Podcast

Mr. Mukul Kumar,

Co-Founder and President of Engineering,

PubMatic.

We at PISB - PICT IEEE Student Branch are proud to

welcome PubMatic as the Title Sponsor for Credenz

2023. PubMatic has been leading the digital

advertising industry since 2006, building cutting

edge ad technology for their customers. It is our

privilege to converse with Mr. Mukul Kumar, Co-

Founder and President of Engineering at PubMatic.

Q: It is common knowledge that starting a company from scratch involves a large planning phase. What led you to co-found PubMatic in 2006, and could you please elaborate on the initial foundations the company was built on?

A: PubMatic was founded 17 years ago with the vision that data-driven decisions would be the future of advertising. I met the other two co-founders, Amar and Rajeev, in 2006, and we agreed that we could help publishers optimize ads to generate more revenue.

We actually wrote it on a paper napkin. We met in

Mumbai and got right on it after coming to Pune.

We started building the product and it was quite

exciting as this was a new domain. We worked on

the idea in my three-bedroom house and came

up with an alpha version in two and a half months

and in late February 2007, we launched and

started running ads. I remember looking at the log

files when we had our thousandth ad impression. I

remember seeing the thousandth ad on my laptop

screen and today we are serving more than 500

billion ads a day so that’s 500 million times larger

scale than we did 17 years ago. And it has been great

building this great product and this great dream.

Q: Advertisements are at the very core of any marketing campaign. While we see countless

ads every day, many of us don’t have a clue as to

what is going on behind the scenes in terms of

the ad placement, the psychology that goes into

it, and its various technological aspects. Could you

please tell us about the basic architecture PubMatic

employs to handle such a large number of ads?

A: In technical language, there are several areas, but the two most critical are ad targeting and

scale. The concept of ad targeting involves displaying

ads specific to the viewer. Targeted ads

require a lot of thought to determine the most

appropriate ad for each person, based on their demographics -- and, of course, with their consent.

This ensures that viewers are only shown

ads that interest them, rather than wasting their

time and advertiser money on irrelevant ads.

The second area is scale, which requires a

significant amount of software development.

This is not just about adding more hardware

to handle an increasing volume of data; it

involves building software that can handle

multiple petabytes of data every day. We process

this data and impressions on a massive scale.

The third area is performance. We serve ads within

120 to 180 milliseconds, a time so quick that the

human eye cannot detect the process. Behind the

scenes, an auction takes place where the page

sends a request to PubMatic, and we send it to




someone else, and the most appropriate ad that

wins the auction is displayed. Whenever code

is written, it is essential to consider the milliseconds

or microseconds that it adds to this process.

Q: Next, let us get into the various products offered by PubMatic. Could you please tell us about the various solutions offered by PubMatic and the markets/clients they service?

A: We offer multiple products, from the PubMatic SSP to Connect, which is our audience product.

We also have Identity Hub and OpenWrap for

our publishers. PubMatic has two kinds of customers:

publishers and buyers.

For the buyer side, we have PubMatic SSP,

Connect and a new product called Activate. This

is a groundbreaking new end-to-end supply

path optimization solution that allows buyers to

execute non-bidded direct deals on PubMatic’s

programmatic platform, accessing premium video

and CTV inventory at scale. Initial launch partners

for Activate include dentsu, FuboTV, GroupM,

Havas, LG, Mars, and Omnicom Media Group

Germany. We are serving the fastest-growing

field of video advertising, which is CTV, OTT and

online video. Two significant customers in India

are ZEE5 and VOOT, and we also have hundreds

of customers around the world. We operate in multiple countries across North America, Europe, and Asia.

Q: As the Co-Founder and President of Engineering at PubMatic, can you talk about your role and responsibilities within the company?

A: My role at PubMatic has evolved over the years. Initially, ours was a startup in Pune until

our first employee joined us in April 2007. So, I was

responsible for a wide range of tasks, including

setting up data centers, assisting customers with

installations and connections, and even installing

the internet in our office. Currently, as a Co-Founder

and President of Engineering at PubMatic, my

role encompasses research and development,

engineering, and innovation. I oversee more than 380 employees who comprise PubMatic’s engineering, data and analytics, and data center

teams. We are proud of the infrastructure we have

built, which we own and maintain in-house. We

are building our infrastructure to ensure successful

business outcomes and a robust future for our

customers and our company. Our infrastructure

is also built to help safeguard a responsible,

sustainable future for society. My team led the

development of our data center strategy that is

now 100% powered by renewable energy. With 12 data centers located globally, all 500 billion daily ad impressions are served from our own data centers. I

am passionate about innovation and exploring new

areas in technology, architecture, and developing

high-scale and high-performance systems.

Q: The technology landscape has evolved drastically in the past decade or so. New technologies

are popping up every few months and tech

companies are at the forefront of this evolution.

How has PubMatic evolved since its founding, and

how has the rapidly evolving technology scenario

affected innovation and research at PubMatic?

A: There’s a lot happening in ad tech, and one of the things I keep talking about is that this is a space where technology changes very fast. This is unlike other areas, where new developments might take years. Every three to six months, we hear about new technologies or approaches, so we must keep evolving.

At PubMatic, we’re at the forefront of data analytics. For example, we use Spark because there are constantly new developments happening with Spark. For data warehousing, we use technologies like




Snowflake. We’re constantly building software with an eye toward the evolution of the industry, in areas such as user privacy. Ensuring users’ consent

and our privacy compliance is our number one

priority.

We’re also thinking ahead and ensuring our

customers and team benefit from future-proofed

technology solutions. For example, Google

announced that Chrome will no longer support

third-party cookies. So, we’re working with multiple

third-party entities, such as ID providers and data

partners to develop innovative solutions to help

advertisers engage consumers. Our team is also

analyzing new technologies in new areas. In fact,

we had a session last week about new generative

AI techniques that the teams might use. There are

lots of exciting and new things happening every

day.

Q: Can you discuss PubMatic’s company culture and how it fosters innovation and collaboration

within the teams? Also, following

that, could you shed light on any exciting projects

or initiatives that PubMatic is currently

working on, or plans to undertake in the future?

A: We have a rich culture of teamwork at PubMatic, and three things are particularly important

in my mind. First is our hackathon, which happens

every year and is an exciting event where engineers

get together and implement a product of

their own interest related to advertising. It’s a 36-hour coding marathon, and many engineers even

stay overnight to code. There are assessments and

cash prizes. Our colleagues are enthusiastic about

building great products and often have out-of-the-box thinking, which leads to great innovations.

We also have a great culture of training, learning

and development. As part of the Engineering 10X

program, I launched our third architecture boot

camp. Our goal from this bootcamp is to build 10 new

architects & 20 additional subject matter experts

at PubMatic. The participants will undergo training on the breadth of the product, understand how multiple components and products work together, and take engineering to the next level.

Some of the most exciting things we’re currently

working on are building scale and performance.

The team is thinking about how to serve a trillion




impressions a day, which is not just about

increasing an array or hash table size. It’s about

how the modules interact with each other, how

interprocess communication works, how we

think about databases, and how we can scale

through efficient investment. In the last three

years, our cost of serving and interactions has

decreased by about 50%, meaning that the

amount of hardware required to serve the

same number of ad impressions has decreased.

Q: India has seen a boom in entrepreneurial activities in the past few years. What advice do you have

for aspiring entrepreneurs and tech professionals,

especially those who are just starting their careers?

A: In my opinion, the number-one priority is to build and bring your product to the customer

as soon as possible, while continuously gathering

customer feedback. I have seen some people

spend too much time building their product before

getting it to the first customer. It is crucial to keep

iterating and building your product. When building

startups, there will always be naysayers who tell

you that you cannot succeed. When we started, we

spoke to multiple advertising experts, and many of

them told us that we could not do it. However, we

kept building.

Q: Lastly, how do you see PubMatic evolving in the next 5-10 years, and what role do you

see yourself playing in its continued success?

A: As a company, I would like to keep this Engineering team as an R&D team. We would

like to keep innovating new technologies, to

constantly think about breaking new barriers

and doing things faster and better. I would

like the team to keep getting to new levels in

terms of how we process our ads or build our

software faster. The latest thing we are looking

at is generative AI and I’m sure there’s a lot that

can be done in this space. There are a lot of new

areas that can be explored, and tremendous

potential to continue innovating as we build the

digital advertising supply chain of the future.

I advise anyone to keep iterating and scaling their

product. When developing prototypes, there may

be some technologies or shortcuts that you can

use to reach customers faster, but you need to

constantly rebuild and improve your software or

product to be able to scale to more customers.

I remember having to change the underlying

technologies of the initial UI and rebuild the entire

ad server in a short amount of time. We rewrote

the ad server in C language as it was much more

performant than Java, which was causing crashes.


We would like to thank Mr. Mukul Kumar for taking

time out from his busy schedule to provide such intriguing

and insightful responses. We hope that our

readers found this conversation interesting and that it opened their minds towards pursuing their goals.

- The Editorial Board



Philomath

Industry 5.0: From Machines to Humanity

The inception of industries in the world started

with the Industrial Revolution around 1760 in

Great Britain with the invention of the steam engine,

which gradually spread all around the world. This was the first transition from the handmade economy to a mechanised economy, which we refer to as ‘The Industrial Revolution’ or Industry 1.0; it gave rise to factories, paving the way for urbanisation. The

invention of electricity gave rise to the Second Industrial Revolution, termed Industry 2.0, which began around the 1870s. The birth of computers marked the rise of the Third Industrial Revolution, i.e., Industry 3.0. This era was called the ‘Digital Revolution’.

The most recent Industrial Revolution, the one we are a part of, is the Fourth Industrial Revolution, or Industry 4.0. It is characterised by smart technologies and efficient networking. State-of-the-art manufacturing styles and automation also constitute a major part of 4.0.

Industry 4.0 is considered a technology-driven

revolution to achieve higher efficiency and

productivity. Industry 4.0 emphasises having intelligent production systems leveraging IT.

However, technology practitioners began to divert

their attention to the matter of reducing manpower

as there was an increase in automation. The focus given to the human element in this model was relatively lesser. Environmentalists also felt the need to take into consideration the use of minimal resources for production.

A study of the evolution of strategic initiatives from Industry 1.0 to 4.0 shows a declining emphasis on the human element. ‘Industry 5.0’ attempts to address this shortcoming. This revolution aims not only to secure and maximise production and technology for a growing population but also to keep workers at the centre of the process model, giving them the importance they deserve. It combines the best of both worlds: the speed and accuracy of automation and the critical thinking of humans. The Fifth Industrial Revolution (Industry

5.0) was formally announced by the European

Commission in 2021 following negotiations with

representatives from funding institutions, research

organisations, and technological companies.

Industry 5.0 is characterised by its core values

and the fact that the worker is at the centre of

the production process. Industry 5.0 urges the

organisations to look beyond personal prosperity

and growth. The core values of 5.0 can be

identified as - Human-centricity, Sustainability

and Resilience. Human-centricity means that manpower is seen as an ‘investment’ rather than an ‘overhead’. This value stresses that

‘Technology is to serve people and society and

so it should not overpower people’. Sustainability

is the process of using resources minimally

thus safeguarding them for future generations.

Resilience, in simple terms, means the ability to

withstand system failures. It is seen as an important

aspect of Industry 5.0 as the future technology

should be able to support critical infrastructure.

Some of the technologies that are a part of Industry

5.0 are Cobots (collaborative robots that work

in unison with humans), Internet of Everything

(IoE- refers to the intelligent linkage between four

essential components: people, processes, data,

and things. It is considered as a superset of IoT),

Digital Twins (a virtual representation of a product or service), Edge Computing (a distributed

computing system that has widespread resources

which helps in faster accessibility of data), etc.
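The Digital Twin idea in particular lends itself to a small illustration. The sketch below (in Python) mirrors a physical motor as a virtual object updated from telemetry; all names here, such as `MotorTwin` and the maintenance thresholds, are hypothetical and not taken from any specific Industry 5.0 platform.

```python
from dataclasses import dataclass

# A minimal, illustrative sketch of the digital-twin idea: a virtual
# object that mirrors the reported state of a physical machine, so the
# machine can be monitored and reasoned about without touching it.

@dataclass
class MotorTwin:
    """Virtual replica of a physical motor."""
    rpm: float = 0.0
    temperature_c: float = 20.0

    def sync(self, telemetry: dict) -> None:
        """Update the twin from a telemetry reading sent by the device."""
        self.rpm = telemetry.get("rpm", self.rpm)
        self.temperature_c = telemetry.get("temperature_c", self.temperature_c)

    def needs_maintenance(self) -> bool:
        """A simple rule evaluated on the twin instead of the machine."""
        return self.temperature_c > 90.0 or self.rpm > 5000.0

twin = MotorTwin()
twin.sync({"rpm": 5400.0, "temperature_c": 85.0})
print(twin.needs_maintenance())
```

In practice a twin would be fed by a live IoT data stream and would drive simulations and predictions, not just a single rule.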



Industry 5.0 is a value-driven approach and is based on three interconnected

core pillars.

Conclusion:

Some of the challenges likely to be faced are a shortage of skilled labour, the continuous adoption of new technologies, security, and an understanding of the core values. Researchers

have suggested that this revolution can also

be referred to as “Society 5.0,” which is not

only limited to manufacturing but also uses

technology to address societal issues. In a society

known as “Society 5.0,” cutting-edge technologies

are actively used in daily life, business, the

medical field, and other areas, not just for the

sake of advancement but also for everyone’s

convenience. Industry 5.0 appears morally sound, and inculcating its values is the need of the hour.

-Maitreyee Khunte

Pune Institute of Computer Technology



Unraveling the Techie

Interview

With Ms. Manasi Joshi

Ms. Manasi Joshi

Director of Engineering, AI/ML at Apple

Ms. Manasi Joshi is an accomplished technology

leader with extensive experience in the field of

Artificial Intelligence (AI) and Machine Learning

(ML). After graduating from the Pune Institute

of Computer Technology, she attended the

University of Minnesota and is currently serving as

the Director of Engineering AI/ML at Apple.

She is responsible for developing and executing

the company’s AI and ML strategies across a

variety of products and services. With a passion for

innovation and a drive to push boundaries, Manasi

has played a pivotal role in developing cutting-edge AI/ML solutions that have revolutionized the

industry.

During her tenure at Google, Manasi worked on

critical software frameworks such as TensorFlow

and Google Brain. Her expertise in this field has

earned her several accolades and awards, and

she is widely recognized as a thought leader and

influencer in the tech industry. In this capacity,

Manasi continues to lead the way in driving AI/ML

innovations that have the potential to transform

our world.

What aspects of your PICT college days do you

Q remember most fondly? Please share some

of your favorite memories with us. Following up:

How has your time at PICT influenced your career?

In 1998, I began my studies at PICT, which was

A a significant change for me, as I had always

lived and studied in Pune. Attending PICT allowed

me to encounter a diverse array of individuals and

perspectives, which helped me expand my thinking.

Some of my happiest memories from my time

at PICT involve spending time with friends both

inside and outside of class. I was part of the second

division, and our class had a vibrant energy.

I also enjoyed participating in Addiction, the Cultural

Fest, every year. From a technical standpoint,

I was a member of the core teams for Impetus

and Concepts, where I led the seminars and co-led the marketing team. When it comes to marketing,

one must have a persuasive pitch since

it involves requesting companies to invest their

funds. I also enjoyed participating in Art Circle

activities. Overall, it was a pretty interesting time.

Attending PICT did not set a specific path for

my professional career. However, since I earned

a computer degree, I knew I would pursue

work in that field. What PICT did offer was the

chance to collaborate with a varied group

of individuals from different backgrounds.

Q How was your time at the University of Minnesota different from your undergraduate education? What differences did you observe, and did your tenure at PICT give you any advantage there?

During my time at PICT, I had the opportunity

A to work with a diverse group of professors and

students who had different backgrounds and experiences

from my own. This exposure taught me

the importance of being receptive to others’ ideas

and opinions, which can shape one’s perspective

on important matters. I realized that having an

open mind can help one be more flexible and either

influence others’ opinions or modify their own.




Comparing my education at PICT to that at the

University of Minnesota, I noticed a difference

in the emphasis on theoretical knowledge.

While PICT introduced us to a wide range of

topics across subjects, there was less focus

on coding or group work, except in the

fourth year. However, I believe that changes

may have been made since my time there.

Working on a project in my fourth year in a team

with Professor Narendra Karmarkar at TIFR, Pune

University, was a valuable experience. He is a

world-renowned expert in computer science

and the founder of Karmarkar’s algorithm,

which deals with optimization programming.

This project was my first introduction to

Python and working collaboratively on code

with a team. This experience came in handy

when I joined the University of Minnesota.

When did you first become interested in

Q machine learning? What piqued your interest

in the subject?

Upon completing my studies at the University

A of Minnesota, I joined Google as a software engineer,

where I spent 16 fascinating years. Initially,

I worked in the Ads Infrastructure team, which

was responsible for various aspects, such as the

visual appeal of ads and delivering personalized

ads based on users’ preferences. In 2015, I began

noticing that machine learning algorithms were

being used to recommend ads, which was a new

concept to me at the time. Google’s investment in

machine learning started in 2010, and it was quite

significant. By 2017, I felt that I had reached the

limits of my potential in the Ads team and sought

a new challenge in Google Research. This department

has contributed to many significant technologies,

such as Google Brain and TensorFlow. This

transition helped me step out of my comfort zone

and pursue my passion for mathematics, which

is fundamental to machine learning. Around

2020, I became interested in on-device machine

learning, which presents unique challenges for

implementing learning algorithms on a practical

device. In 2021, I decided to join Apple, where my

focus is primarily on on-device machine learning.

Increased responsibility brings a brand new

Q set of challenges and problems to deal with.

What are some unique challenges you face in such

a high position?

A The field of AI and ML is constantly evolving with numerous advancements in hardware, software, and algorithms. Competition

among companies to develop more efficient and

long-lasting software is increasing. However, before

pursuing such innovations, teams should

consider the ethical and sustainable use of ML, device-specific

optimizations, and product differentiation.

This requires encouraging teams to think

critically and creatively, which can be challenging.

Additionally, the pandemic has affected office culture

and collaboration, posing further challenges

for team leaders. To overcome these challenges, it

is important to create a space where everyone can

express their opinions and work collaboratively.

Q Nowadays, every computer science undergrad is encouraged to pursue a career in AI or ML. From a professional standpoint, how does

one distinguish themselves from the crowd?

The field of AI and ML is expansive and offers

A a plethora of opportunities for individuals to

contribute. There are various subfields to explore,

such as computer vision, natural language processing,

and core AI and ML theory, which have

practical applications in photography, autonomy,

accessibility, and other areas. There are also

many other fields where ML can be applied, such

as enterprise and health and wellness. In addition

to improving models, focusing on security

and privacy is equally important, especially with

ML on devices. These are fundamental aspects of

the field that require continued research and development.

It is a challenging yet exciting time,

and graduates with specialized degrees should

consider exploring these applications as the field




still evolving and has yet to reach maturity.

What AI/ML advancement has impacted the

Q most people in the last ten years? What developments

can we expect in the next ten years?

A The last decade has seen significant advancements

in AI and machine learning, including

open-source frameworks like TensorFlow and PyTorch, breakthroughs by DeepMind and OpenAI,

and impressive accomplishments like AlphaGo

beating a human player at Go and AlphaFold predicting

protein structures from gene sequences.

Generative adversarial networks have also been a

major innovation, enabling researchers to develop

novel models and big tech companies to create

scalable solutions in the cloud. Additionally, on-device

machine learning has become increasingly

important for data security and privacy. However,

as we move into the next decade, responsible use

of machine learning will be a critical consideration,

particularly in terms of fairness, bias, physical safety,

security, privacy, and explainability. Although new

architectures, frameworks, and hardware will likely

emerge, there is still much work to be done to interpret

and explain ML predictions and ensure that

the technology is used ethically and responsibly.

Q Historically, women have been subject to certain disadvantages in the workplace that men holding similar positions didn’t have to deal with.

How have things improved, and what problems

still need to be conquered? What would be your

advice to young women in the software space?

A Over the last decade, there has been a significant effort to increase female participation in the tech industry, with more girls opting to

pursue careers in this field and companies taking

steps to promote diversity in the workplace.

Many organizations have established scholarship

programs and competitions to highlight the talents

of women. However, the COVID-19 pandemic

harmed these efforts, as many women left their

jobs to care for their families. While some have

taken a career break, returning to work can be

challenging without the right motivation.

Therefore, my advice to working women is to

focus on self-motivation, as it is the most effective

form of motivation. Women should

be determined to succeed and motivated to

achieve their goals, as nothing can hold back

a person who is driven by self-motivation.

A good leader is accountable for a team’s

Q performance. Have you ever had to

make difficult choices in your role as a leader?

If so, how did you approach the situation?

When holding a leadership position in the

A tech industry, difficult decisions are inevitable.

Leaders must balance product-related concerns,

such as determining whether a feature should be

included in the next release and whether it is feasible

to implement within the given timeframe.

Such decisions require prioritization and a strong

conviction of thought, even if not everyone agrees.

Additionally, leaders must consider the human aspect

of their work. At times, team members may

not perform up to the desired standard, which can

be demotivating for others. Leaders must be prepared

to make tough decisions in such situations.

During my time at Google, a junior engineer expressed

uncertainty about their role in the team

and their intention to switch positions. This incident

offered me a new perspective on the importance

of everyone understanding the bigger picture and

how their work contributes to the overall project.

It is the leader’s responsibility to foster a positive

work environment that motivates team members

and keeps them connected to the project.

How do you see artificial intelligence and

Q human collaboration evolving in the near

future? Will AI eventually replace humans in

most areas? If so, should that be AI’s goal?

A I disagree with the idea that AI will fully replace human creativity and innovation. Instead, AI will primarily serve as a supportive tool

to enhance human efficiency and creativity. For instance,

AI can assist humans in creating music, art,




choreographies, exercise routines, and other similar

fields. One area where AI can significantly

benefit humans is health and wellness. The Apple

Watch, for instance, has features like fall detection

and ECG monitoring that are potentially

life-saving and can aid people in leading better

lives. Additionally, robots powered by AI can take

over mundane tasks, freeing humans to explore

new avenues that better utilize their abilities.

Personal assistants and high-precision surgical

procedures are other examples of how AI can

help us be more productive. While doctors possess

knowledge that cannot be replicated by machines,

AI can aid them in performing precise surgeries,

which may otherwise be prone to human

error. Regarding ChatGPT, although it has benefits,

it is also a controversial topic. The model’s overconfidence

or hallucinations about its knowledge

may result in misinformation, which can be dangerous.

Therefore, researchers are actively working

to improve these generative models by increasing

their safety and reducing their inbuilt bias.

In conclusion, AI will interact with humans as a

supportive tool, leading to increased human productivity

and efficiency. I am talking from a point

of privilege, and there are certain things that we

just can’t control. The current scenario is quite sensitive,

and there is a lot of outrage. Many people

don’t have an option, and telling them to leave their

job just because their employer has the resources

to use software solutions raises plenty of ethical

questions. So, take my answer with a grain of salt.

What weaknesses in current AI technology

Q prevent us from attaining completely

autonomous machines? Following up: What do

you believe is the largest ethical issue related to

ML and AI? How do big tech companies go about

addressing these concerns?

A I’ll refer back to my previous answer, which relates to what prevents machines from being fully autonomous and the dangers surrounding

aspects such as bias, safety, robustness, privacy,

and security in general. On one hand, we can celebrate

that Waymo recently finished one billion

miles of self-driving on actual roads and many

billion miles of self-driving in simulations. However,

does this mean those self-driving cars are now

universally available? Not at all. I recently visited

Pune in December and was at the intersection of

Karve Road and Prabhat Road. I thought, “Let’s

get Waymo here; let’s get Cruise. If they can solve

this, if they can drive in Appa Balwant Chowk, then I will say it’s okay to use this technology safely.”

Even big tech right now isn’t universally available

and isn’t considering the diversity of factors as

much as it should. There is a lot of attention given

to the fact that our data is not representative,

so it only works in certain contexts. Therefore,

many models and applications of the models are

not released worldwide. They are only available

in limited contexts because people are aware

that they do not work outside of those contexts.

You also asked about the ethical issues, and there

are certainly questions about them. For example,

ChatGPT is now leading to AI plagiarism in education,

and we have yet to determine an answer

to that. It is very difficult to discern between a

response coming from ChatGPT and an actual

human. Another ethical issue that arose was

with the COVID-19 studies conducted across the

world to predict where it would reemerge or to

identify COVID-19 symptoms and remedies. Many scandals resulted from vaccination drives that relied on models which were limited in their capacity but were applied far more broadly than they should have been, to predict where to send more vaccine supplies and to conclude which health conditions COVID-19 is associated with.

Based on my personal experience working at

Google and Apple, I can attest to how these large

tech companies address concerns regarding the

responsible use of AI and privacy. In 2019, Google

established AI principles that prioritize the

responsible use of AI and implemented the Sparrow model, which allows for course correction by




incorporating human intelligence to identify

offensive language or errors. At Apple, privacy

is a fundamental value and a major consideration

when dealing with large AI models.

Companies are now implementing the practice

of transparency by opening up their technology

to others, inviting participation in the design

process, and providing feedback loops instead

of keeping the technology in a black box.

Q The recent layoffs have had a significant impact on the job market. With over 40,000 employees being laid off for various reasons such as

feared recession and inflation, do you believe reducing

the workforce is the best way to deal with

these issues, or is there a better and more cost-effective

way to get through these difficult times?

I do not have a definitive explanation as to

A why the layoffs occurred, as I am not privy to

the specific circumstances surrounding the decisions.

The recent layoffs at Google were surprising

and seemed to be conducted in a random manner. Unfortunately, I knew some

individuals who were affected by these layoffs.

While layoffs are never easy for any company,

there are situations where cost-cutting measures

are necessary and certain departments or

units may have to be dissolved. However, without

more information, it is difficult to provide

a more precise answer to this complex issue.

What is your daily routine like? What hobbies

Q do you pursue, and have they benefited you

professionally in any way?

A My daily routine is constantly changing, but

it’s always busy and active. I am fortunate to

have a supportive husband and two kids to take

care of. As a full-time employee, most of my time

is spent at work, but I try to maintain certain activities

every day.

One of my hobbies has been pursuing Indian classical dance (Kathak) for the past 12 years in the

USA. I am blessed to have an amazing teacher,

and dancing for an hour feels like meditation to

me. It reminds me of the importance of being completely

engaged in whatever we do and controlling

our minds to focus on what’s important. This lesson

from Kathak has benefited me professionally.

Besides dancing, I enjoy any form of physical exercise,

and I recently started biking, which has

been fantastic. Additionally, I am making a conscious

effort to communicate more with my

relatives and build meaningful relationships.

Q Any parting words for our readers?

As we wrap up this interview, I want to share

A three key points. Firstly, I encourage you to

stay curious, as it is a valuable quality to cultivate

in your career. Secondly, view feedback as

a gift instead of criticism, as it can help you address

blind spots and become a better person.

Lastly, practice gratitude and appreciate everything

you have. The pandemic has made us recognize

our privileges and allowed me to reflect on

my accomplishments and be thankful for them.

We would like to thank Ms. Manasi Joshi for taking time out of her busy schedule to provide such intriguing and insightful responses. We hope that our readers found this conversation interesting and that it opened their minds toward pursuing their goals.

- The Editorial Board



ChatGPT

The AI chatbot

Featured

There’s a new buzzword on the internet! From the youngest of school students to the elderly, everyone has tried their hand at this new chatbot. For some, it’s just a gimmick; for others, a revolutionary tool. Nevertheless, it can be considered one of the most popular and widely accessible forms of “Artificial Intelligence” yet, and it has taken the internet by storm. Since its launch in late November 2022, ChatGPT has seen millions of new signups, often pushing its servers beyond their capacity to handle the traffic!

All this is because of how incredibly friendly and

captivating the chatbot is when interacting with

the users, with a near-perfect understanding of

natural language and fluency in its responses.

This chatbot can answer even the wildest of your

questions in the most subtle way possible, can

write thesis papers, talk in multiple languages,

write poems, scripts, and even personalised lay-off letters :), thanks to the large dataset on which

the model has been trained. Since the release,

people from all over the world have been sharing

the stunning answers that they received from

ChatGPT, and it has blown everyone’s minds.

From hip-hop songs to writing movie stories, from

analysing python codes to writing sophisticated

descriptions of mechanisms, you name it, and

people have tried asking ChatGPT to implement

it. Moreover, it also is smart enough to understand

the context and the specific need of your prompt,

and can alter the response accordingly: if you ask it to write a tweet, it will restrict itself to 280 characters and add hashtags; it will add phrases like “Like and Subscribe” to YouTube scripts; and so on.

The company behind this is OpenAI, a research laboratory and organisation that focuses on building safe AI and ensuring its benefits are

widely and evenly distributed. It is also behind

the famous image generation AI called DALL-E,

that creates an image from scratch based on the

prompt given. They have published multiple

research papers about AI and NLP, and have created

algorithms and trained several language models.

Unlike other AI chatbots and similar projects, ChatGPT has not been a one-month wonder; its popularity and usage have been increasing consistently to this day. In March 2023, OpenAI released a newer, more extensively

trained version called GPT-4, which not only does

all the previous text generation tasks better, more

creative and human-like, but can also understand

image inputs. People went berserk on the internet,

sharing examples of ChatGPT’s new achievements

like coding a game from scratch, or looking at a

photo of ingredients and generating a recipe for a

dish that can be made using them.

How exactly does this chatbot work?

ChatGPT, in simple words, uses neural networks

to make sense of the language and the meanings

of words, phrases, etc. OpenAI, the company behind ChatGPT, has a trained language model called Generative Pre-trained Transformer 3, or GPT-3, which can analyse text input and generate, from scratch, new text that continues the

given prompt. Generative Artificial Intelligence,

as the name suggests, can generate content on

its own, unlike other types of AI that only act on existing instances of data. Generative AIs need

to be trained under semi-supervised learning,

where the model is first trained on an unlabelled

dataset and is then fine-tuned on supervised data.

Training a neural network is a sophisticated way

of saying that the model has been “taught” the

dataset fed to it, just like a human is taught how

to speak and understand over the years. The

responses that ChatGPT produces come from being trained on a giant catalogue of information drawn from the internet. And this is the key

factor that makes ChatGPT so good: it doesn’t

search for the answer in its dataset, it generates

an answer based on the training it has received.
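That generate-rather-than-retrieve idea can be illustrated with a deliberately tiny sketch in Python. The bigram model below is nothing like GPT-3’s transformer with billions of parameters, but it shows the same principle: absorb word-to-word statistics from training text, then sample a continuation one word at a time.

```python
import random
from collections import defaultdict

# Toy "training" corpus; ChatGPT's real training data is vastly larger.
corpus = (
    "the model generates text . "
    "the model learns patterns . "
    "the chatbot generates answers ."
).split()

# "Training": record which words follow which (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=4, seed=0):
    """Generate words by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        out.append(rng.choice(follows.get(out[-1], ["."])))
    return " ".join(out)

# The answer is not looked up anywhere; it is generated from the
# statistics absorbed during training.
print(generate("the"))
```

A real large language model replaces this count table with a deep neural network over tokens, but it too produces each next word from learned statistics rather than looking the answer up.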

Use cases of ChatGPT:

Taking into consideration the amazing

conversational skills that the chatbot possesses



ChatGPT’s underlying model, GPT-3, has about 175 billion parameters, making it one of the largest language models ever trained.

it seems as if ChatGPT was specifically made for

certain use cases.

The term most often used to describe ChatGPT and other AIs is an ‘assistant’ or a ‘tool’ for helping

humans. People might think that content creators and writers will become obsolete because of chatbots, but they are the ones who can benefit the most from using them. One can instantly see an idea realised, and the response received can serve as a source of inspiration, a starting point for their

future work, be it writing a script for a speech,

or a project report. You can also input a code

snippet and it can tell you what’s wrong with it.

You can have a full-blown conversation with it about something you know, and it will further educate you, just like talking to our peers.

Customer service chatbots can also benefit a

lot once they are trained on a similar model.

In fact, similar transformer language models, such as GPT-2, are available open source, and many developers have now started incorporating them to create their own chatbots or text-summarization tools specific to their project or client.

It can also be used to enhance search engines:

recently, Microsoft made a big move by making

huge investments into OpenAI, and launching

a newer version of their Bing Search engine

integrated with ChatGPT. In a bid to revive Bing,

Microsoft has integrated the AI chatbot with

the search engine, to make finding answers on

the web quicker and easier. This has been a major reason why many people, impressed by how well Bing chat works, switched over from Chrome. It makes searching the web convenient and quick. This also prompted Google to have its AI division immediately launch its own AI chatbot, called Bard.

Where ChatGPT lacks, and its drawbacks:

It’s nowhere close to perfect though. Although it is

one of the best AI conversational tools developed

so far, it has its own set of limitations.

First and foremost being that ChatGPT has

been exposed to a finite dataset, and thus has

limited knowledge. This leads to wrong answers, factual errors, and biased behaviour in its responses, because the data it has been trained on can itself be biased. But this is a temporary flaw, as

newer versions with larger trained models quickly

improve the accuracy and the quality of responses.

Chatbots like ChatGPT also aren’t sentient; they have no feelings and lack the personal human touch. Moreover, they often cannot fully understand the context, leading to nonsensical responses. The fact that ChatGPT is so widely

accessible also opens up the possibility of it being

misused for spreading misinformation, scamming

people, impersonating people etc.

As soon as ChatGPT was integrated with Bing,

people also started to point out how this can cause problems for online content companies, such as news websites. This is because such websites rely

heavily on the advertisements that they serve to

people who visit their web pages, but now that the

AI is easily reading all sites and directly compiling

an answer for the user, this revenue stream for

such websites will be affected.

More and more people will get answers directly

through this and not by visiting the original

website, thereby hampering their viewership.

And the largest problem such text-based AIs face

is the truth problem. There is no way for ChatGPT to know whether what it is generating is true or not. Sometimes, the responses it generates are just wrong! And because it is so good at generating perfectly fluent sentences, it can often be seen stating made-up facts with full confidence.

The OpenAI team also had to develop active filters to restrict the chatbot from answering controversial, inappropriate, and explicit questions; an unrestricted chatbot could generate harmful and offensive answers, which remains a significant risk.



The training dataset for ChatGPT contains over 570 GB of text.

There are several challenges ahead, too: as AI gets closer and closer to being as good as humans, real jobs will be lost, and outrage will follow. There’s also the whole question of copyright infringement and fair use to answer. Decision-makers all over the globe will have to put in active effort to ensure a seamless blend of AI tools into our lives, without controversies and legal hassles.

Conclusion, Future scope of AI tools:

The journey of AI has just begun, and there’s no

end to this field. Projects like ChatGPT, DALL-E 2, and Prisma have proved that AI isn’t a fantasy anymore, and that this is just the beginning of its applications. Big players in the tech world, like Google, Apple, and Microsoft, will soon be looking to expand their Artificial Intelligence teams to work on newer projects.

Nevertheless, the future of AI is exciting, and with

continuous development and innovation, we can

expect to see significant advancements that will

shape the world as we know it.

Just as the internet took off within a span of a few years, there’s no reason to doubt that we will soon see similar exponential growth in applications of AI tools, in every field imaginable.

We have surely entered the AI revolution, as

companies all over the world are already looking

to incorporate AI as a feature to enhance their

products, be it smartphone cameras, or voice assistants,

or security systems. Even in fields like Agriculture

and Manufacturing, AI tools are being developed

to help with efficiency and sustainability.


- Vibhav Sahasrabudhe

Pune Institute of Computer Technology



Bridging the Rift

Interview

With Mr. Uday Ghare

Mr. Uday Ghare is the Vice President of the Telecom, Media, and Entertainment Business at Tech Mahindra. He is also the co-founder of Maker's Lab and the M&E Practice, and currently resides in the United States of America. With 26 years of experience in IT services, he has held various leadership, managerial, consulting, and technical positions, including 23 years with two large GSIs (Global System Integrators), Tech Mahindra and Infosys. Additionally, Mr. Ghare is part of the Stanford LEAD program, where he pursues his passions for business and leadership. He has been a disruptor and changemaker in his organization, achieving incredible milestones throughout his journey. Without further ado, it is my great pleasure to welcome Mr. Uday Ghare to this virtual stage to share his insights and experiences with us.

Q: What do you remember most vividly from your time at PICT? Please tell us about some of your favorite memories. What differences do you notice between college students today and those you knew when you were a student?

A: As I look back on my time at PICT during my undergraduate studies, one memory stands out vividly. During my second year, we were studying Data Structures and Algorithms with a young lecturer from IIT Kanpur who was simultaneously preparing for the civil services. One day, he challenged us to create a small version of WordStar, a popular word-processing software at the time. We were initially skeptical and thought it was an impossible task, but he reassured us that we had the necessary skills and knowledge to build it ourselves. Some of us took on the challenge, and over the semester, we worked tirelessly to create a reasonable product. Although it was not marketable, it was a significant achievement for us. It gave us our first taste of product development, and we were proud of what we had accomplished.

Mr. Uday Ghare,
Vice President of the Telecom, Media, and Entertainment Business at Tech Mahindra;
Co-founder of Maker's Lab and the M&E Practice.

In addition, we had many other fond memories of our time at PICT, such as the annual tech festivals where we built various solutions. Back then, programming was very different from what it is today. We had limited resources on our PCs, and we had to take into account the constraints of memory, RAM, and heap while building programs. This conservative programming style taught us to use garbage collection and other techniques to still develop applications. Today's engineering students have an advantage in terms of the vast amount of information and data available to them through the Internet and social media. However, I still believe that the core fundamentals of computing have not changed significantly. Although new topics like AI and ML have emerged, we had an AI course during my engineering studies as well. The main difference is that students now have the ability to build and experiment with models due to more powerful machines and computing power.




Q: It is clear from your participation in business programs at multiple schools that you are enthusiastic about business and leadership education. What leadership traits should one cultivate throughout their career, in your opinion?

A: In my view, leadership is all about inspiring people and making tough decisions. The most crucial aspect of being a leader is the ability to make informed decisions. This is a difficult task that requires a clear understanding of the available options and their potential consequences. Leaders must choose the best course of action based on the information available at the time. In addition to decision-making, leaders must also possess strong communication skills, particularly when it comes to delivering difficult news or facing challenging situations with employees and customers. It takes courage to face failure and navigate through difficult times, both internally with employees and externally with customers. These are some of the essential traits that I have learned throughout my journey as a leader.

Regarding courses, I believe they offer a great way to stay updated on the latest technologies and management techniques. Continuous learning is crucial for staying on top of the latest trends and gaining knowledge from like-minded people and professors who are experts in their fields. That's why I always strive to keep myself updated with what's happening in the world around me.

Q: What sparked your interest in the media and telecommunications sector?

A: After completing my engineering degree, I pursued an MBA from Poona University. During my time on campus, I was selected by a company for their media and panel division, thanks to my unique combination of studying business management with marketing as my major and having a background in computer engineering.


I was selected to be a media analyst and a product analyst. My understanding of market research and software made me well-suited for the role. As a media analyst, I was in charge of managing operations, including the creation and reporting of TRP (Television Rating Point) data. This experience sparked my interest in the media industry, and I was able to work with nearly 120 Indian media customers. This was my first foray into the media industry. As my career progressed, I joined Mahindra British Telecom, a leader in the telecommunications industry. This experience introduced me to the telecommunications sector, and I continued to develop my skills and knowledge in both the media and telecommunications fields.

Q: What led you to co-found Maker's Lab, and what was its initial focus?

A: Every company wants to embark on innovation, but it's not always easy to implement. In 2014-2015, I was heading a large business for a US telecom company, where we were at the cutting edge of mobile technology and had implemented an agile transformation. There was a lot of pressure from leadership to drive innovation, but it was difficult to tell people who were occupied with project work to innovate. So, I realized that we needed a dedicated team to bring about real innovation. One of my colleagues was coming back from the US, and he was innovative and technically sound, so we convinced our leadership to create a separate innovation lab. We found an underused space in Hinjewadi, converted it into a garage-like module, and put in 10-15 engineers dedicated to thinking about what we could do. We gave them resources and open-ended problem statements to study and come back with ideas. That's how we started Maker's Lab, which focused on customer-driven and problem-driven innovation. We created a demo area where every customer who visited Tech Mahindra would visit Maker's Lab, and we would demo everything that was built




there. We learned from every interaction with the customer and used the feedback to build on top of it. Today, Maker's Lab is a big entity and the innovation brand of Tech Mahindra, with six Maker's Labs globally. We've also opened our doors to college students from COEP, PICT, and the IITs. Overall, I consider Maker's Lab to be one of my career achievements.

Q: How were technologies like AR/VR, IoT, and robotics involved in the innovation process at Maker's Lab?

A: I believe the most significant learning from Maker's Lab for anyone interested in innovation can be summed up in two or three points. Firstly, any innovation must solve a customer problem, ensuring that someone will buy and use it. Secondly, adoption is key: how do you ensure people adopt your innovation? Our focus was always on creating highly customer-focused products that directly benefit them. For example, we created our own chatbot, which was refined at Maker's Lab around 2017, when we were exploring conversational bot technology. Today, this bot is implemented across multiple customers, generating revenue through service-based management and virtual chatbot solutions.

Similarly, we developed an augmented reality solution that assists telecom engineers in fixing or maintaining mobile towers. The solution enables engineers to wear a camera and automatically receive a list of steps to follow. We've also built machine learning solutions such as the Sentiment Analyzer, which analyzes text and determines the customer's sentiment.

In essence, these technologies and innovations focus on problem-solving. If you intend to enter the industry, it's crucial to focus on adoption. This is because people don't easily embrace change, so a lot of effort is required to promote technology adoption. Once you've convinced an initial set of users to try the technology, the "network effect" takes over. This is where the primary challenge for innovators lies.

Q: Could you tell us about some of the exciting projects that you have worked on at Maker's Lab?

A: As I was saying earlier, the chatbot project was one of the most interesting ones, back in 2016-17. Chatbots were just emerging at that time, and we wanted to create something that was industry-grade and very useful, which is exactly what we did. It was selected as one of the top 50 most widely used enterprise-grade chatbots at that time, and Tech Mahindra's HR bot is now built on the same chatbot. It was a great project.

During the COVID-19 pandemic, the Maker's Lab team worked closely with a pharmaceutical company in India to create a therapeutic medicine for COVID-19. The team identified the molecule and simulated its testing in a virtual environment, which required a lot of effort. A tie-up is now underway to take it forward. This project was purely based on data science, and the team members worked on that specific molecule.

Currently, there is a lot of focus on Quantum Computing, and we are collaborating with several European countries that are experimenting with this technology. Maker's Lab has also created BHAML (Bharat HTML), which is designed to enable people in rural India, or other places where English is a barrier, to program in their native languages. BHAML converts the code into an HTML webpage, eliminating the need to know English HTML commands. BHAML has been implemented successfully in many schools in India.
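The core idea can be sketched as a simple keyword-translation step. Note that the native-language keywords below are invented for illustration and are not BHAML's real syntax:

```python
# Illustrative sketch of the idea behind BHAML: keywords written in a native
# language are mapped to standard HTML tags before the page is rendered.
# The keyword table is hypothetical; it is NOT BHAML's actual vocabulary.
KEYWORDS = {
    "shirshak": "h1",     # hypothetical word for "heading"
    "parichchhed": "p",   # hypothetical word for "paragraph"
}

def render(tag_word: str, text: str) -> str:
    """Translate one native-language element into an HTML element."""
    tag = KEYWORDS[tag_word]
    return f"<{tag}>{text}</{tag}>"

# e.g. render("shirshak", "Namaste") -> "<h1>Namaste</h1>"
```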

These were some of the interesting projects that Maker's Lab has undertaken. Although I was the initial founder who laid the groundwork, it was the smart, young team members who executed these projects.




Q: Everyone is aware that creating a new company unit involves many steps. You have undoubtedly gained profound business insights from your experience co-founding two such units. Would you mind explaining what happens behind the scenes and how one plans before carrying out such a major initiative?

A: I believe I have been fortunate to have led two such initiatives, namely Maker's Lab and the Media and Entertainment Business Unit. A few fundamental aspects come into play while setting up a business line. Firstly, having a clear vision and mission is crucial. Being in the business, one needs to understand the customer's requirements: what they are looking for, how much they are willing to pay, and what sets them apart from others. This leads to the creation of services and product offerings, where a business thesis is developed just like a startup's. It is important to ensure that budget requirements, funding sources, pitching strategies, break-even points, pricing models, and marketing plans are all addressed. It's an all-around job, almost like running a startup.

Both Maker's Lab and the Media and Entertainment division followed these principles, though the objectives differed. Maker's Lab was focused on creating an innovation ecosystem, with customer-driven problem-solving being a high priority. Revenue generation was not the main focus, but rather creating a culture of innovation, intellectual property creation, and working with educational institutes. Media and Entertainment, on the other hand, focused on generating revenue and creating a separate vertical for Tech Mahindra in the industry. Nonetheless, the fundamental aspects I just mentioned were key to both initiatives.

Q: What are some of the biggest challenges facing the telecom and media industry today, and how do you see these challenges being addressed in the future?

A: In the telecom industry, it's all about every new G (2G, 3G, 4G, 5G), with a cycle of 8-10 years to adjust to each new G that comes along. So, today's biggest challenge for the telecom industry is to monetize 5G, which involves a huge investment in setting up the 5G network, since the whole design has changed. The biggest amount of transformation and innovation is taking place in figuring out the best way to monetize it.

During the 4G era, the telecom industry did not make much money on the network. Unfortunately, the companies that really benefited from the network were marketplace companies like Uber, Google, and Airbnb, who needed to bring two sets of people (buyers and sellers) together on a platform, which happened via the network provided by the telecom companies. However, the only return the telecom companies received was the subscription revenue for the network, while the big money was made by these marketplace companies.

In 5G, telecom companies want to capture the value of the market and provide solutions to enterprise customers using 5G to make money out of it. In the media industry, the biggest transformation happened during Covid. During Covid, the media industry had come to a standstill, as there was no way to shoot movies. Moviemaking and theme parks were stopped, so all the focus shifted to streaming platforms like Disney Plus, Zee5, and Netflix, so that people could watch content at home. And the gaming industry made a lot of money. The media industry is now more focused on how to make the streaming business more profitable. There are a lot of innovative ways of shooting movies, and with changing movie-making processes and features, the industry has completely changed.

Q: How do you think the future of the telecom and media industries will be shaped by new technologies like AI, blockchain, Web 3.0, and 5G?




What are some of the major trends influencing the telecom and media sectors, and how can businesses maintain their competitiveness in this quickly evolving environment?

A: The concepts of AI, ML, and data-driven decision-making are improving significantly and are being used extensively across industries, for instance in recommendation engines. These engines utilize data such as a customer's purchase patterns to determine what product or service they should be offered, or what bundle would suit them best. By analyzing a customer's usage of phones and the internet, recommendation engines can cross-sell, upsell, or optimize costs to retain the customer.
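A toy version of such a recommendation engine can be built on co-occurrence in past purchases; all data and logic here are invented for the sketch:

```python
from collections import Counter

# Minimal co-occurrence recommender: products that appeared together with
# the given product in past orders are suggested as cross-sells.
# The order data is invented for illustration.
ORDERS = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"phone", "earbuds"},
]

def recommend(product: str, top_n: int = 2):
    counts = Counter()
    for order in ORDERS:
        if product in order:
            # Count every other item bought alongside `product`.
            counts.update(order - {product})
    return [item for item, _ in counts.most_common(top_n)]

# e.g. recommend("phone", 1) -> ["case"]  (bought together twice)
```

Real engines use far richer signals (browsing, usage, demographics) and learned models, but the "bought together, so offer together" intuition is the same.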

Furthermore, data science is now heavily involved in fleet management and route-optimization algorithms. This includes determining where trucks should go, how they should be scheduled, and ensuring that distribution and supply chain management are efficiently executed.
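To give a flavor of such route optimization, here is a minimal nearest-neighbour delivery-route heuristic; the stop coordinates are invented for the sketch:

```python
import math

# Nearest-neighbour heuristic for ordering delivery stops: from the current
# location, always drive to the closest unvisited stop. Coordinates are
# invented; real planners use much stronger algorithms than this greedy one.
STOPS = {"depot": (0, 0), "a": (1, 0), "b": (1, 1), "c": (5, 5)}

def route(start: str = "depot"):
    unvisited = set(STOPS) - {start}
    order, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda s: math.dist(STOPS[current], STOPS[s]))
        order.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return order

# e.g. route() -> ["depot", "a", "b", "c"]
```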

Q: What is your daily routine like? Are there any hobbies you pursue, and have they benefited you professionally in any way?

A: I work full-time on weekdays, so my schedule is occupied with my job. However, I make it a priority to set aside one hour each day to learn something new. Whether it is watching educational YouTube videos or reading informative articles, I strive to gain knowledge and an understanding of the latest trends. This habit has undoubtedly benefited me professionally. As for my hobbies, I am an avid reader and music enthusiast; I love to sing and listen to music in my free time. Additionally, I have a dog and enjoy taking it out for walks. One lesson I have learned from my hobbies is the importance of complete focus on the task at hand. Whether singing or reading, it's essential to keep your mind solely on the activity. There are plenty of distractions, but your focus and determination will eventually resist their pull. These habits can benefit you both professionally and personally.

Q: Any parting words for our readers?

A: My biggest advice for engineers is to prioritize skill development that will give them a head start in their careers. So, what skills are these? I teach them in my company and call them ABCD. "A" stands for awareness: engineers should understand and stay up to date with industry trends by reading about front-runner companies like Adobe, Google, and others. "B" is for behavioral skills, which are essential for working effectively in a team, managing stress, and building good coping mechanisms. Some people cope with stress by going to work, while others find solace in reading or listening to music. Regardless of the method, it's important to develop coping mechanisms to manage stress; building soft skills is just as important as technical skills. "C" is for computing skills, which are core to engineering. Nothing can replace a solid foundation in computing skills, so it's important to practice coding and design skills consistently and not take shortcuts in college. Finally, "D" is for domain: engineers should understand how computing is used in different domains and spend time building domain understanding. This will give them a unique perspective when they enter the job market. The ABCD mantra will help engineers stay relevant to their company and society, which is critical for staying competitive and avoiding obsolescence.

We would like to thank Mr. Uday Ghare for taking time out of his busy schedule to provide such intriguing and insightful responses. We hope that our readers found this conversation interesting and that it opened their minds toward pursuing their goals.

-The Editorial Board



Telesurgery

A History of Medical Robotics

Featured

It was in 1958 that 'Robota' finally came into existence through the initiative of General Motors, whose robot 'Unimate' acted as an aide to automobile production. The use of robots maintains industrial productivity and accuracy in manufacturing. Furthermore, robots were seen as a crucial factor in deep-sea and space exploration. All that a human cannot, or may not want to, do can be done by building a robot with the right functionalities.

In a dimly lit room in Bohemia, Czechoslovakia, Josef Capek, a painter, author, and poet caught amidst the horrors of World War I, worked on a short sci-fi story, Opilec, predicting in the process a phenomenon from the future: 'robota', meaning 'laborer' in Czech. His brother, Karel Capek, a journalist and writer exempted from military service owing to severe spinal problems, watched the war from Prague. He further explored 'robota' in a futuristic realm in the play 'Rossum's Universal Robots', garnering unprecedented appeal among the masses, earning him a Nobel Literature nomination and the human race a concept to develop for centuries ahead.

Robotics, to put it in a literary sense, is an integration of the physical sciences and computational mathematics, built to open fascinating possibilities, quench curiosities, enhance the experience of our being, and serve as a medium to bring our wildest imaginations to life.

The world is changing all the time, and today, breakthroughs in robotics technologies are epiphanies for the human race. Consider 5G-aided robotic telesurgery. For the woman caught in an accident with serious nerve damage, lying in a hospital bed bleeding away; for the soldier dying in the corner of a battlefield during a crisis; for the baby born with a hole in its heart and no qualified doctors within physical reach; for the rape victim fighting for her life despite the incredible pain of torn tissue and organs; for the elderly patient in a poverty-ridden state; for the thousands of people who lose their lives untimely in the absence of a qualified doctor to aid them in time, telesurgery could be a godsend.

According to WHO statistics, 15% of the world's population still does not have access to basic healthcare, for reasons such as poverty, a lack of quality medical education, and a shortage of doctors. So when our digital screens light up with visuals of a state-of-the-art, sleek, white polymer machine holding forceps and scalpels, performing complex operations controlled by a doctor several thousand miles away, the heart renders a tiny wave of ecstasy and curiosity spikes.

Surgical robots are classified as active, semi-active, and master-slave robots. Active systems work autonomously (under the supervision of a surgeon) based on pre-programmed information. In master-slave systems, the robot mimics every action of the surgeon exactly, in real time; no pre-programming is required. The use of master-slave systems is particularly important in treating patients with contagious diseases, and in cases where real-time intracorporeal medical imaging of the patient is required throughout the surgery to give the doctor a wider field of view inside the patient's body. Semi-active systems function in an intermediate state between active and master-slave systems.
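Two ideas commonly used in master-slave teleoperation, motion scaling and tremor filtering, can be sketched as follows; the scaling factor and smoothing constant are illustrative assumptions, not values from any specific robot:

```python
# Sketch of two master-slave teleoperation steps: motion scaling (a large
# hand movement becomes a small instrument movement) and exponential
# smoothing to damp hand tremor. Parameters are invented for the example.
SCALE = 0.1   # 10:1 motion scaling: 10 mm at the master -> 1 mm at the slave
ALPHA = 0.5   # smoothing factor: higher = more responsive, less filtering

def slave_positions(master_positions, scale=SCALE, alpha=ALPHA):
    out, prev = [], 0.0
    for x in master_positions:
        target = x * scale                         # scale down the motion
        prev = alpha * target + (1 - alpha) * prev # smooth out jitter
        out.append(prev)
    return out

# e.g. slave_positions([10.0, 10.0]) -> [0.5, 0.75]
# (the instrument eases toward the 1.0 mm target instead of jumping)
```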

The basic structure of most surgical robots consists of a control system and a computer system working in perfect synchronization to function as the central nervous system of the robot. The control system connects the master (the surgeon) to the slave (the robot). The computer system translates signals to and from the control system to the arm system, the physical component of the robot that carries out the surgical procedures. The arm consists of several joints and links that give the robot a wide range of motion and flexibility; some surgical robots have multiple arms to perform multiple tasks simultaneously. The end effectors are instruments such as forceps, scissors, and other surgical equipment attached to the end of the robot arms. A camera system provides a high-resolution view of the



Robotics Technology and Wireless Networking are the technologies used in telesurgery.

surgical field to the surgeon. Some surgical robots have multiple cameras that can be positioned to provide different views of the field.

Telesurgery was first explored by NASA in the 1970s, in an attempt to provide remote treatment to astronauts in space. It was after the development of the video computer chip that laparoscopy and minimally invasive surgery were integrated into common surgical practice, opening up an arena of evolution for telesurgical robots.

The Evolution of Structures for Telesurgery

One of the initial robots, AESOP (Automated Endoscopic System for Optimal Positioning), held an endoscope in its hand and was designed to function just like a human surgeon. It had wide applications in laparoscopic treatments relating to gynecology and urology. A surgeon sat at the controlling system, navigating through the body using the endoscope. When the surgical site was found, the endoscope could be held firmly in a position that maximized the surgeon's view, so that he only had to focus on controlling the actions to complete the surgery, improving surgical precision and consequently decreasing patient trauma.

Robodoc, developed by Integrated Surgical Systems in the 1980s, was designed to assist total hip replacement surgery. The human hip is made up of the hip socket (the acetabulum, part of the pelvic bone) and the femur head (the top of the thigh bone). During surgery, the femur is separated from the acetabulum and the damaged cartilage is cleaned from the socket. Subsequently, a complete joint pair or a femur head is modeled artificially. Robodoc reams the acetabulum with extremely high precision so that the femur can fit perfectly into the socket, reducing the possibility of revision surgery. Robodoc works on a program fed in by the surgeon based on the particular patient's requirements, which can be revised mid-procedure. Besides precision, it absolves surgeons of the rather time-consuming and exhausting task of bone reaming.

BPH (Benign Prostatic Hyperplasia), wherein the prostate gland expands to cause urological



SSI Mantra is a surgical robot recently made in India.


problems, is a common condition as men get older. The treatment for severe BPH is transurethral ultrasound prostate resection: a thin tube-like tool called the resectoscope is inserted into the urethra, helping the doctor to see and trim away the excess tissue causing BPH. The Probot uses ultrasound detection to accurately map the size of the prostate instead of a physical probe that might damage the delicate area, and it safeguards patients from potential infections.

It was in the early 2000s that Intuitive Surgical developed the Da Vinci system, named after the famous artist Leonardo da Vinci, an ode to his deeply anatomical study and sketches of the human body (believe it or not, the man was a perfect example of how art complements, and even facilitates, scientific study). The first version of the machine, the Da Vinci S, was primarily used for single-port surgery; i.e., the entire operation is carried out through a single incision made in the body. The next version, the Da Vinci Si, was a mini version of the machine with a compact structure, designed for smaller hospitals and helping doctors achieve much higher dexterity, precision, and control. It consisted of a patient-side cart with an entire array of robotic arms, each customized for its own distinct function. Both of these earlier versions were used for gynecological and urological surgeries: hysterectomy, nephrectomy, pelvic floor repair, myectomy, and prostatectomy.

The Xi version had a similar core structure, but with a ceiling mounting towering above the patient, giving it a much larger coverage area. It also brought improvements in 3D imaging capabilities, a much more intuitive user interface, and the new EndoWrist technology, another breakthrough in medical robots.

The EndoWrists try to mimic, or even enhance, capabilities beyond those of human hands and wrists. Featuring high-definition 3D cameras, they transmit images to the surgeon's console in real time. Tactile feedback, an incredibly important aspect of surgery for gauging the force being applied to tissue, is provided using force sensors located at the robotic arms' fingertips. Position and motion sensors let the doctor detect the exact position and movement of the robot, and vice versa, so that the surgery is carried out in perfect synchronization. Today, these machines are even used for cardiological, thoracic, and spine surgeries and, as of 2021, had been used in more than 8.5 million procedures, causing minimal blood loss and reducing patients' recovery times significantly.

The MAKO and Navio systems were modeled for joint arthroplasty, the partial or complete replacement of damaged bone joints with a prosthesis, using 'bone sensing' technology that is especially effective in rheumatoid arthritis and osteoarthritis.

Amidst Covid-19, thousands of Covid patients with pre-existing heart conditions flooded hospitals worldwide, hanging on to mere threads of life, shallow breaths, and hours to spare. Nurses and surgeons were risking their lives with this continuous flow of affected patients, especially when performing PCIs (percutaneous coronary interventions), a procedure used to treat coronary artery blockages. CorPath eliminated this risk factor by isolating patients from direct contact with surgeons: a guided robot performed the procedure of inserting the catheter into the patient's arteries to find the location of the blockage. Once the blockage is found, it can be cleared using an inflated stent or medical balloon.

Challenges in Robotic Telesurgery:

While long-distance telerobotic laparoscopies have been in practice since 1993, the core problems have remained in networking and telecommunication. For a high-precision surgery to be successful, the time delay between the surgeon sending a signal and the surgical machinery receiving it must be less than 100 ms. On a 4G network, latency was approximately 0.27 seconds, but with 5G it has now been brought down to 0.01-0.05 s (10-50 ms). With faster network speeds, higher video quality and higher-accuracy feedback can be obtained



It is already being deployed at the Rajiv Gandhi Cancer Institute and Research Centre in New Delhi.

at the surgeon's end. A network-dependent surgery also needs to ensure constant network reliability, independent of power cuts or arbitrary problems that may cause loss of signal.
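The latency figures quoted above can be sanity-checked with a tiny budget calculation:

```python
# Sanity check on the quoted latency figures: the delay between the surgeon
# sending a signal and the machinery receiving it must stay under ~100 ms.
THRESHOLD_MS = 100

def safe_for_surgery(latency_ms: float) -> bool:
    return latency_ms < THRESHOLD_MS

# 4G at ~270 ms fails the budget; the 5G range of 10-50 ms passes.
LATENCIES_MS = {"4G": 270, "5G (best)": 10, "5G (worst)": 50}
VERDICTS = {net: safe_for_surgery(ms) for net, ms in LATENCIES_MS.items()}
```

Note that this compares the quoted end-to-end signal delay directly against the threshold; a full engineering budget would also account for video encoding, processing, and haptic feedback time.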

Another possible challenge is network security: the notorious hacking of medical systems puts vulnerable patients in increasingly difficult situations. Imagine a system with a weak security protocol, where an untimely hack could complicate an almost-successful surgery with tiny inaccuracies of motion. Alongside network security systems such as SD-WAN (Software-Defined Wide Area Network), firewalls, and IDPS (intrusion detection and prevention systems), quantum cryptography, based on the properties of individual photons, also finds applications in

telesurgery. In quantum cryptography, a message is encrypted into secret keys by creating a photon stream using a polarizer. The key is decrypted and confirmed by the receiver, and the subsequent data can only be decrypted using this particular quantum key. Any external eavesdropper can be detected during the authentication process, as the received messages will have different photon properties than those expected by the receiver. This system is still being refined to minimize noise and ensure faster encryption and decryption at both ends.
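A toy simulation of this BB84-style exchange, heavily simplified (real QKD involves actual photons, error correction, and privacy amplification):

```python
import random

# Toy BB84-style key exchange: the sender encodes bits in one of two
# polarization bases ('+' or 'x'); measuring in the wrong basis yields a
# random bit, so only positions where both parties' bases match are kept.
def bb84_key(sender_bits, rng):
    sender_bases = [rng.choice("+x") for _ in sender_bits]
    receiver_bases = [rng.choice("+x") for _ in sender_bits]
    # A wrong-basis measurement destroys the information: random outcome.
    measured = [bit if sb == rb else rng.randint(0, 1)
                for bit, sb, rb in zip(sender_bits, sender_bases, receiver_bases)]
    # Bases are compared publicly; matching positions form the shared key.
    return [m for m, sb, rb in zip(measured, sender_bases, receiver_bases)
            if sb == rb]

key = bb84_key([1, 0, 1, 1, 0, 0, 1, 0], random.Random(0))
```

On average about half the positions survive the basis comparison; an eavesdropper who measures in the wrong basis disturbs the photons and is exposed when sample bits are compared, which is the detection property described above.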

Considering that telesurgery is a relatively new field that requires expensive and advanced machinery and networks, how can it possibly help in increasing accessibility in healthcare? Practically speaking, it is far more feasible for the administrative system to establish partnerships with private players and introduce these systems in hospitals than it is for a poor or middle-class patient to afford the travel and long-term admission to a hospital far away from his or her locality.

Most modern technologies become less expensive when multiple manufacturers come into the picture. The administration can simply incentivize manufacturers in the field to improve investors’ participation, ultimately cutting down the equipment-related costs in telesurgery. When hospitals receive the equipment at a lower cost, the cost of treatment for the patients should automatically drop.

In the near future, artificial intelligence trained by experts could perform simple emergency surgeries, audio-instructed machines could fix humans, and open-source communities could even build a DoctorGPT (a utopian surgical version of ChatGPT). The ivory-colored machines, colorful 3D imageries, and sleek metallic bodies are perfect fascinations for the tech-savvy Gen-Z, but the greatest challenge remains to have these systems adapt to the critical environments where they are most needed: war-torn regions like Yemen, Syria, or Ukraine, and the sites of natural disasters, as in Turkey and Nepal, to save the lives of those injured. Advancements in tech and automation are meant to be noble innovations, with the objective of easing the pain in human life, not of creating billionaires with humanoid armies that we are both in awe and terror of.
-Aditi Bankar

Pune Institute of Computer Technology



Alumni of the Year

Aditya Shirole, Sahil Sharma, Shubham Chintalwar


Novellus

Aditya Shirole,
Co-Founder - GigIndia

Shubham Chintalwar,
CTO - GigIndia

In recent times, the world has become full of

uncertainties. The global economy has slowed

down and we have entered a full-blown recession.

Amidst this storm, both freshers and experienced

folks are struggling to find stability in their careers

due to large-scale layoffs and hiring freezes.

Especially in the IT sector, this has become a major

trend. These companies went on hiring sprees

during the lockdown, when the IT industry was

the only sector that was booming. However, due

to the recent economic crisis caused by the Russia-

Ukraine conflict and various other factors, these

companies are being forced into cost-cutting

which unfortunately involves laying off employees.

Many have turned to gigs as a temporary solution

to this issue. Also, there is a steadily growing

employee base that relies solely on freelancing for

their sustenance. There are various platforms that

connect gig workers/freelancers and businesses

such as UpWork, Fiverr, and our focus for this

article, GigIndia.

GigIndia is a technology-driven platform that

connects businesses with on-demand workforces

to help them solve a variety of tasks, ranging from

data entry to digital marketing. The platform was

founded in 2017 by Pune Institute of Computer

Technology graduates, Aditya Shirole and

Sahil Sharma. Shubham Chintalwar, also a PICT

graduate, joined them later as the third co-founder

and Chief Technical Officer.

Their journey is inspiring to us PICTians. They faced

numerous adversities along the way, but they kept

believing and have turned GigIndia into one of the

biggest freelancing platforms with over 1 million

registered Giggers, over 35 million Gigs, and a user

base spanning over 10 countries.

Sahil Sharma,

Co-Founder - GigIndia


The idea for GigIndia developed when Sahil,

Aditya, and Shubham were still in college. They

noticed that many of their peers were finding

it difficult to search for suitable internship

opportunities. Many of them were struggling to

balance their internships and college academics




because of various constraints such as high

travel times and the mandatory 75 percent

attendance rule imposed by the college.

This observation led to the inception of GigIndia

which was initially supposed to be a platform

that connected students with companies,

which in turn would offer internships that the

student could complete remotely, acquiring experience

and earning some money along the way.

GigIndia provides a range of tools and resources

to help gig workers improve their skills and

enhance their earning potential. These include

online training programs, job alerts, and a rewards

program that recognizes and rewards the

top-performing gig workers on the platform.

The platform also provides businesses with

various services which make their job easy.

Their journey shows us that sticking to what one

believes in is a surefire way to become successful.

We hope that their success story awakens the

entrepreneur within some of you. Startup culture

is booming in the country, and entrepreneurs

who know how to leverage technology are set

to become major drivers behind this growing

economy. These constant and consistent efforts

will eventually lead to more Indian graduates

opting to make their careers in India rather than

going abroad, which will reduce the brain drain

that is prevalent at present. Consequently, this

transition will lead to India becoming a global

superpower on par with the US and China.

In March 2022, PhonePe, a leading digital payments

platform in India, announced its acquisition

of GigIndia. The acquisition is part of PhonePe’s broader strategy to expand its platform and offer more services to its customers. PhonePe already offers

a range of digital payment services, including

UPI payments, bill payments, and online shopping.

With the acquisition of GigIndia, PhonePe

can now offer a wider range of services, including

freelance hiring and talent management.

In conclusion, GigIndia is a promising platform

that offers businesses an efficient and cost-effective

way to access a flexible and skilled workforce,

while also providing students and young

professionals with opportunities to gain valuable

work experience and earn money on the

side. With its cutting-edge technology and innovative

solutions, GigIndia is poised to become

a leading player in the Indian gig economy.


- The Editorial Board



Cloud&DevOps


The Rise of Kubernetes

Pansophy

The term Kubernetes began to reverberate

throughout the tech sector in 2014. The very first

question that usually entered people’s minds back

then was, “How do you even pronounce it?” Fast

forward eight years, and it has grown to be among

the biggest open-source projects worldwide. With

innumerable businesses relying on this well-liked,

open-source technology, and a community of 4

million+ developers, Kubernetes today is the top

container-orchestration solution in the market. So

let’s dive deep into the architecture of Kubernetes

and understand the reason for its popularity.

Containers:

Different developers collaborating on a project

may have different work environments. A block

of code functioning correctly on one’s device

may not execute the same elsewhere. The issue of

how to get the software to run consistently when

relocated from one computing environment to

another is resolved by containers. This may involve

moving software from a developer’s laptop to a

test environment, from a staging environment to a

production environment, or even from a real system

in a data centre to a virtual machine in a private or

public cloud. However, when supporting software

environments are not the same, issues occur.

One may use Python version 2.7 for testing,

while Python 3 will be selected for actual use.

Debian Linux will be used for testing, while

Red Hat is used for production. Strange faults

and bugs can appear in such scenarios, but any

software must function regardless of the network

topology, security measures, or storage options.

A container is a package that contains the full

runtime environment, including the application,

all of its dependencies, libraries, other binaries,

and configuration files. To remove disparities in

OS distributions and supporting infrastructure,

the application platform and its dependencies

are all encapsulated into a single container.

Before the rise of containers, developers made use

of virtualization technology which passes around

the entire package bundled with the OS. The

catch is that a virtual machine with its full OS is

several gigabytes in size as opposed to a container,

which will only be tens of megabytes in size.

As a result, virtual machines cannot compete

with the number of containers that can be

hosted on a single server. Containers are lightweight,

small, modular, and easier to manage.

The abstractions provided for individual container

images make us rethink how distributed applications

are built. It allows for quicker development

by more narrowly focused, smaller teams who

are individually in charge of particular containers.

Kubernetes (K8s):

Containers are the future. They provide us with a

lot of flexibility for running cloud-native applications

on physical and virtual infrastructure. They

have become an integral part of the development

process. But being a new paradigm, it is difficult to

grasp and necessitates that apps be designed to

fully utilize their features. Container runtime APIs

are excellent for managing single containers but

are inadequate for managing applications that may

have hundreds dispersed across numerous hosts.

For operations like scheduling, load balancing,

and distribution, containers need to be controlled

and connected to the outside world. This

is where Kubernetes comes to the rescue! A platform

for deploying, scaling, and managing containerized

applications. It manages workloads to

make sure they function as intended by the user.

This feat is accomplished using a variety of

K8s platform capabilities, such as Pods and

Services. Pods are a collection of containers

that are controlled as a single application.

File systems, kernel namespaces, and IP addresses

are shared by the containers inside a pod making

it easier to configure them for discoverability, observability,

horizontal scaling, and load balancing.




Kubernetes was open-sourced by Google in 2014 and is now maintained by

the Cloud Native Computing Foundation (CNCF).


K8s also provides several tools to help you automate

and manage your application deployment.

All you have to do is describe your system, the

services you want to opt for, and the number of

replicas you want to create for the same deployment.

And that’s it! K8s will do all the hard work

for you and make sure that your deployments are

running as intended. K8s deployment manager

continuously monitors and updates the status of

your cluster deployment. Failures caused by unforeseen

events are quickly amended by shutting

down the deployment and creating a new one.

The rolling update strategy replaces old pods with

new ones gradually while continuing to serve clients,

maintaining zero downtime for your application.
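The control-loop behaviour described above can be sketched as a toy reconciliation function (a minimal illustration of the idea, not the real Kubernetes controller code):

```python
def reconcile(desired, observed):
    """Toy sketch of the reconciliation idea: diff the desired replica
    count against the observed pods and emit the actions needed to
    converge, the way a Kubernetes controller continuously does."""
    # Unhealthy pods are always replaced.
    actions = [("delete", p["name"]) for p in observed if not p["healthy"]]
    healthy = [p for p in observed if p["healthy"]]
    if len(healthy) < desired:            # scale up
        actions += [("create", f"pod-{i}") for i in range(desired - len(healthy))]
    else:                                 # scale down
        actions += [("delete", p["name"]) for p in healthy[desired:]]
    return actions
```

Called with a desired count of 3 and one failed pod, it schedules the failed pod for deletion and creates replacements until three healthy pods exist; the real controller simply runs this comparison in a loop against the cluster's live state.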

Advantages of K8s:

- Autoscaling: companies can scale both up and down based on actual demand.
- Portability: more portability with lower chances of vendor lock-in, and it is significantly cheaper than its alternatives.
- Cloud support: major cloud providers have K8s-specific offerings (e.g., AWS EKS, GCP GKE), and multiple different cloud deployments can run simultaneously.
- Community: one of the most popular open-source projects, with more than 2.8 million contributions from Google, Red Hat, VMware, and GitHub.

K8s is the future, but what does the future look like?
the future, but what does the future look like?

“More than 75% of worldwide enterprises will

operate containerized applications in production

by 2022”, according to Gartner. As a result,

Kubernetes has emerged as the de-facto standard

for running containerized, cloud-native

apps at scale. Still, the dev space sees a lot of flashes in the pan, and hot topics become irrelevant in a matter of months. How exactly is K8s different?

K8s is part of a broader development trend toward

microservices. The future of Kubernetes is in the

abstractions that we build on top of Kubernetes

and make available to users through Custom Resource

Definitions (CRD). Just like Linux was a platform

upon which we built everything a decade or

more ago, few developers care much about it today

due to the enormous amount of abstractions

on top. More organisations will use K8s at scale,

managed K8s services will continue to expand in

scope and availability, and developers will start to

consider K8s sooner in the development lifecycle.

One of the backlashes is security. As the adoption

of K8s accelerated, so did the number of threats

targeted at its clusters, with Cryptojacking being

one of the most common attacks on clusters.

With the emergence of Web3, a collision course

has been set as the role of K8s in distributed computing

becomes somewhat obsolete, and a complete

rework of the underlying framework may be

inevitable. Shared ledger backup systems would

replace the existing database rollback systems to

elevate the security standards of K8s but at the cost

of their performance or scalability. Overall, the future

of Kubernetes looks bright, and we can expect

this technology to only continue growing in popularity.

Several exciting developments and trends

await the world of Kubernetes so keep an eye out!

-Ayush Gala

Pune Institute of Computer Technology



Autonomous Vehicles

Technology Advancements


Philomath

Tesla’s Autopilot, Uber’s self-driving taxis, and Google’s Waymo One are driving on the streets of San Francisco. Autonomous Vehicles, or AVs, are no longer something one has to look to the future for. With tech giants like Tesla, Nvidia, Zoox, Waymo (a Google subsidiary), and many more investing billions of dollars into self-driving technologies, we will, in our lifetimes, experience driverless travel in our own vehicles.

Autonomous vehicles, also known as self-driving cars, are vehicles that can operate without human intervention. They use a variety of sensors and technologies to perceive their environment, make decisions, and control their movements. Autonomous vehicles are a rapidly evolving technology that has the potential to transform the way we live, work, and travel. One of the key technologies driving the development of autonomous vehicles is reinforcement learning. Reinforcement learning is a type of machine learning that involves training an agent to make decisions based on feedback in the form of rewards or punishments. In the context of autonomous vehicles, reinforcement learning can be used to teach the vehicle to navigate complex environments and make decisions in real time.

Some of the advanced reinforcement learning algorithms in use today belong to a class called Proximal Policy Optimization, or PPO. These are policy-based algorithms that directly update the policy (the neural network) instead of creating a value function and indirectly deriving a policy from it. The novelty in PPO is that it can use the Stochastic Gradient Descent algorithm in reinforcement learning to train the agent.
Advancements in sensor technology are also

playing a crucial role in the development of

autonomous vehicles. Sensors such as LiDAR, radar,

and cameras are used to perceive the vehicle’s

environment and detect obstacles, road signs,

and other vehicles. These sensors are becoming

increasingly sophisticated and affordable, making

it possible to build more advanced autonomous

vehicles.

Autonomous driving is a technology that

has been gaining traction in India. In recent years,

several startups and multinational companies

have begun testing autonomous vehicles in India’s

cities and highways. These companies are working

to adapt their technology to India’s unique driving

conditions, which include heavy traffic, crowded

streets, and unpredictable driving behaviors.

One example of a startup that is working

on autonomous driving in India is Swayatt

Robots. The company has developed a state-of-the-art

autonomous vehicle that is currently

undergoing testing on Indian roads. The

car is equipped with a range of sensors and

cameras and uses machine learning algorithms

to perceive and navigate its environment.

In addition to private companies, universities are

also playing an important role in the development

of autonomous vehicles. Many universities have

established collaborations with automobile

companies to research and develop autonomous

driving technologies. These collaborations have

led to breakthroughs in areas such as sensor

technology, machine learning, and control systems.

One example of a university that is working

on autonomous driving is Carnegie Mellon

University. The university’s Robotics Institute

has been conducting research on autonomous

driving since the 1980s and has developed

several groundbreaking technologies in the




India’s First Self-Driving Vehicle, the zpod, Unveiled by Bengaluru-based AI

Startup Minus Zero, Signaling Major Advancement in Autonomous Driving

Technology.


field. These include the NavLab, a self-driving

vehicle that was developed in the 1990s,

and the Boss, an autonomous vehicle that

won the DARPA Urban Challenge in 2007.

Open-source work is also driving innovation

in the autonomous driving industry. Several

startups and multinational companies are making

their autonomous driving technology available

as open-source software, allowing developers

to build on their technology and contribute to

the development of the industry as a whole.

One example of a company that is working on

open-source autonomous driving technology

is Baidu. The Chinese tech giant has developed

an autonomous driving platform called Apollo,

which is available as open-source software.

Apollo includes a range of technologies,

including perception, planning, and control

systems, and has been used by a number of

companies to develop autonomous vehicles.

Safety is a crucial concern in the development

of autonomous vehicles. As autonomous

vehicles become more prevalent on the roads,

it is important to ensure that they are safe

and reliable. This requires a range of safety

practices, including testing and validation,

redundant systems, and fail-safe mechanisms.

Testing and validation are key safety practices

in the development of autonomous vehicles.

Before an autonomous vehicle can be deployed

on the roads, it must undergo extensive

testing and validation to ensure that it is

safe and reliable. This testing includes both

simulated and real-world testing and involves

a range of scenarios and driving conditions.

Redundant systems are also important safety features in autonomous vehicles. Redundant systems include backup sensors, cameras, and control systems that can take over in the event of a failure. These systems are critical for ensuring that the vehicle can operate safely and reliably, even in the event of a technical failure or malfunction.

Fail-safe mechanisms are likewise important safety features in autonomous vehicles. These mechanisms are designed to ensure that the vehicle can come to a safe stop or take other evasive action in the event of a system failure or other emergency situations. Examples of fail-safe mechanisms include emergency braking systems and backup power systems.

Collaborations between universities and automobile companies are also important for ensuring the safety of autonomous vehicles. These collaborations bring together researchers, engineers, and other experts to develop and test new technologies and safety features for autonomous vehicles. The University of Michigan and Ford Motor Company have partnered to form the UM Ford Centre for Autonomous Vehicles, which has led to the development of several advanced technologies for autonomous vehicles, including a virtual testing environment and a system for detecting and responding to anomalous events on the road.

In conclusion, autonomous vehicles are a rapidly evolving technology with the potential to transform the way we live, work, and travel. Reinforcement learning and advancements in sensor technology are driving the development of autonomous vehicles, and collaborations between universities and automobile companies are ensuring that these vehicles are safe and reliable. Open-source work by startups and MNCs is also contributing to the growth of the autonomous driving industry. As this technology continues to develop, it will be important to ensure that safety remains a top priority.


-Manas Sewatkar

Pune Institute of Computer Technology



DALL·E 2

The AI Artist


Philomath

Artificial intelligence (AI) has revolutionized

the way we live, work, and play. Of the many

areas wherein its intervention has proved to be

highly effective, AI has made significant strides

in image recognition and generation. Generative

models like GPT-3, StyleGAN, and DALL-E have

shown how AI can create realistic images from

textual prompts. DALL·E 2 is the latest generative

model that takes this ability to the next level.

DALL·E 2 is a generative model developed by OpenAI,

a leading AI research organization. It is the

successor to the original DALL-E model, which

gained fame for its ability to create unique images

from textual prompts. DALL·E 2 builds on this capability

by generating higher-quality images with

more details and better composition. The name

“DALL·E 2” is derived from a combination of two words: “Dalí” and Pixar’s “WALL-E”. The first part of the name, “Dalí”, is a tribute to Salvador Dalí, a renowned surrealist artist known for his vivid and

imaginative artwork. The second part of the name

“WALL-E” is a reference to the Pixar movie of the

same name.

Working:
DALL·E 2 uses a combination of two deep learning models: a transformer-based language model called GPT-3 and a generative adversarial network (GAN) called StyleGAN. The GPT-3 model takes in the textual description provided by the user and generates a set of latent vectors, which are then fed into the StyleGAN model. The StyleGAN model then generates an image that matches the description.

The GPT (Generative Pre-trained Transformer) architecture is trained on a massive dataset of images and their corresponding textual ‘captions’. These image-caption pairs are called image-text embeddings. When a textual prompt is given to DALL·E 2, it uses its natural language processing (NLP) capabilities to understand the meaning of the words and phrases in the prompt. With this understanding, it creates a rough sketch of the image by referring to an enormous pool of images captioned by keywords present in the prompt. For instance, if we enter the prompt “aurora borealis above Times Square”, it will first analyze and identify the parts of the textual prompt that are semantically meaningful, like places, objects, and animals; in our case, “aurora borealis” and “Times Square”. It then creates a ‘mental imagery’ of sorts, retaining such characteristic features while varying the non-essential parameters. In this way, DALL·E 2 can generate multiple images from a single text prompt.

DALL·E 2 then refines this sketch using StyleGAN, an image generation network that is capable of generating highly detailed and photorealistic images. The StyleGAN approach involves the use of two neural networks, a generator network and a discriminator network, to create high-quality images. The generator network takes a random noise vector as input and produces an image that matches the desired style. The discriminator network then evaluates the quality of the generated image and provides feedback to the generator network, allowing it to improve its performance over time.

DALL·E 2 derives its name from the famous Pixar movie WALL-E, which was released in 2008.

One of the key features of StyleGAN is its ability to generate images with a high level of diversity and complexity. This is achieved through a technique

called adaptive instance normalization (AdaIN),

which allows the generator network to adjust the

style of the image at each layer of the network.

By adjusting the style of the image in this way,

StyleGAN can create images that vary in texture,

color, and other features. For instance, we can

further modify the text instruction specifying the

style of art like oil painting, digital art, 3D renders,

photo-realistic images, or just extremely specific

illustration types ranging from pixel art, cartoon,

cyberpunk to Van Gogh, Matisse, or Claude Monet.
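The AdaIN idea described above can be shown in a minimal sketch for a single feature channel (flattened to a flat list for simplicity; StyleGAN applies this per channel over 2-D feature maps):

```python
from statistics import mean, pstdev

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization (AdaIN) for one feature channel:
    shift and scale the content activations to match the style
    channel's mean and standard deviation."""
    c_m, c_s = mean(content), pstdev(content) + eps
    s_m, s_s = mean(style), pstdev(style) + eps
    return [s_s * (x - c_m) / c_s + s_m for x in content]
```

After the transform, the content channel carries the style channel's statistics, which is how the generator adjusts texture and color at each layer.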

Here are the different results for the prompt “An astronaut lounging in a tropical resort in space”. The image on the left shows a pixel art depiction of the prompt, while the one on the right is a photorealistic image of the given description.

Capabilities:
Apart from high-quality and realistic image generation, DALL·E 2 has several capabilities that set it apart from other generative models. Some of these capabilities include:

Contextual Understanding: DALL·E 2 is capable of generating coherent images that combine multiple complex elements into a semantically cohesive whole. Users can input complex sentences with complement clauses, and DALL·E 2 can create images that are in line with the intended meaning of the input text.

Object Manipulation: It is capable of performing a technique called inpainting, which allows for the addition of new objects to existing images while preserving style and context. The process involves taking an original image and filling in missing or damaged parts with new content that matches the surrounding style and context.

Interpolation:

By employing a method called “text diffs”, DALL·E

2 can transform one image into another. This

technique involves modifying an image by making

incremental changes to it based on the differences

between the two images. This means that

it can create smooth transitions between different

objects or features in an image, resulting in a

more visually coherent and realistic final product.
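The incremental-transition idea can be sketched as plain linear interpolation between two embedding vectors (illustrative only; DALL·E 2's "text diffs" operate on its internal image/text embeddings, not on plain Python lists):

```python
def interpolate(start, end, steps):
    """Walk from one embedding vector to another in equal increments,
    producing the sequence of intermediate points that would decode
    into a smooth visual transition."""
    return [
        [(1 - i / steps) * a + (i / steps) * b for a, b in zip(start, end)]
        for i in range(steps + 1)
    ]
```

Decoding each intermediate vector back into an image yields the gradual morph between the two endpoints that the technique is known for.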

Limitations:
Despite its impressive capabilities, DALL·E 2 has some limitations. Some of these are:




Limited Dataset: DALL·E 2 is trained on a limited

dataset of images and textual prompts. This

means that it may not be able to generate images

for some obscure or rare objects or concepts.

Lack of Realism: While it can generate high-quality

images, they may not always be realistic. Additionally,

in many cases, it can be seen that the model

still lacks basic coherence and compositional reasoning

that human creations would rarely lack.

Ethical Concerns: DALL·E 2, like other generative

models, raises ethical concerns about the

potential misuse of AI-generated images. It can

enable the spread of disinformation, harassment,

or bullying by helping create ‘deep fakes’ or

even believable iterations of certain scenarios.

In the example given, the original image is on the left, and the modified images in the center and on the right show a new object that has been added using the inpainting process. DALL·E 2 adapts the style of the added object to the existing style in that part of the image. For instance, in the second image, the corgi is made to match the style of the painting, while in the third image, it has a more photorealistic appearance.

Conclusion:
In summary, it is undeniable that DALL·E 2 is a significant advancement in the field of generative models. Its ability to produce visually stunning, contextually relevant, and downright awe-inspiring images from mere text is nothing short of remarkable. However, it is not perfect and has some limitations and ethical concerns. As AI continues to advance, it is crucial to consider the ethical implications of its applications and use it responsibly. It is truly exciting to think about the countless possibilities as these models become increasingly optimized and humans become more adept at utilizing them as valuable tools across various endeavors. As we continue to witness the evolution of DALL·E 2 and its boundless potential to become even more extraordinary, the future looks incredibly promising.


-Aaryan Giradkar

Pune Institute of Computer Technology



Philomath

Nuclear Fusion! Long heralded as the holy grail of humanity’s search for limitless clean energy, it lies tantalizingly within the realm of possibility. It has the potential

to generate tremendous amounts of energy

while being favorable to a world that is grappling

with the effects of severe climate catastrophes.

What is Nuclear Fusion you may ask? Well, it is the

process by which two atomic nuclei are joined

together to form a heavier nucleus, resulting in

a release of vast amounts of energy. This is the

same process that powers the sun and other stars

where the intense heat and pressure of the stellar

interior allow fusion reactions to occur at an

incredible pace. It has attracted a lot of attention

because it does not produce nearly as many emissions

or radioactive waste as traditional forms of

energy like fossil fuels or nuclear fission do. Additionally, the fuels for fusion reactions are abundant: deuterium is present in large quantities in seawater and can be extracted relatively easily, while tritium can be bred from lithium.
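For reference (standard physics, not a figure from the article), the deuterium-tritium reaction that most fusion programs target releases about 17.6 MeV per event, split between the helium nucleus and the neutron:

```latex
{}^{2}_{1}\mathrm{D} \;+\; {}^{3}_{1}\mathrm{T} \;\longrightarrow\; {}^{4}_{2}\mathrm{He}\,(3.5\ \mathrm{MeV}) \;+\; {}^{1}_{0}n\,(14.1\ \mathrm{MeV})
```

The charged helium nucleus can heat the plasma further, while the neutron carries most of the energy out to be captured as usable heat.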

Over the years, billions of dollars have been

spent on fusion research and with the recent

breakthroughs that researchers have achieved, this

technology seems closer to reality than ever before.

The first fusion reactors were fired up in the very

early years of the Cold War. The Soviet Union

opted for the Tokamak design while the Americans

opted for the stellarator design.

In essence, nuclear fusion really comes down

to overcoming the electromagnetic repulsion

between the two nuclei to fuse them together.

Recent breakthroughs that humanity has achieved are largely thanks to the ITER (International Thermonuclear Experimental Reactor) project, a collaboration between 35 countries and the largest scientific collaboration in history, with a budget of over $20 billion. According to Dr. Michiya Shimada, deputy director-general of ITER, the world's largest fusion project, "ITER is really the last step before the commercialization of fusion. ITER is designed to demonstrate the scientific and technical feasibility of fusion power as a practical energy source, producing net energy over an extended period of time."

In the race to achieve nuclear fusion, two main approaches evolved: magnetic confinement and inertial confinement. In magnetic confinement, the fuel is held in a magnetic field and heated by a variety of methods, including radio-frequency waves and neutral beam injection. Inertial confinement, on the other hand, initiates fusion reactions by compressing and heating targets filled with thermonuclear fuel; in this process, the fuel is heated by high-intensity lasers.

Magnetic Confinement

In 2021, China's 'Artificial Sun', a tokamak-based nuclear reactor design, set a new world record for sustained plasma operation, maintaining a plasma temperature of over 120 million degrees Celsius for a whopping 101 seconds. This represents a significant milestone towards the goal of practical fusion power, as sustained operation is necessary for energy production.

Additionally, physicists at the Princeton Plasma Physics Laboratory (PPPL) have proposed that the formation of "hills and valleys" in magnetic field lines could be the source of the sudden collapses of heat, ahead of disruptions, that can damage doughnut-shaped tokamak fusion facilities. Their discovery could help overcome a critical challenge facing such facilities. The research provides new physical insights into how the plasma loses its energy towards the wall when there are open magnetic field lines, and it will help in finding innovative ways to mitigate or avoid thermal quenches and plasma disruptions in the future.

Inertial Confinement

In 2020, researchers at Lawrence Livermore National Laboratory in California achieved a breakthrough in inertial confinement, demonstrating the highest fusion yield ever recorded in a laboratory setting. The researchers used a novel approach

JUNE 2023 ISSUE 18.1

41

CREDENZ.IN


A newly developed tungsten-based alloy that performs well in extreme environments similar to those in fusion reactor prototypes may help harness fusion energy.

called a dynamic hohlraum, which involves using a rapidly moving x-ray source to compress the fuel pellet. This technique allowed for more uniform compression and a higher fusion yield than previous approaches.

Additionally, the National Ignition Facility (NIF) was built at Lawrence Livermore with the goal of achieving ignition, the point at which the energy released by fusion reactions exceeds the energy required to initiate them. The NIF, which houses the world's most powerful and most energetic laser system, has already achieved fusion ignition, which significantly brightens our future.

Although fusion research is mostly funded by central institutions, the private sector has developed different and innovative approaches to achieving fusion. Two notable companies operating in this field are Helion Energy and TAE Technologies.

Helion Energy

Currently, Helion's sixth-generation machine, Trenta, has reached and sustained 100 million degrees Celsius, the temperature at which the company would run a commercial reactor. The company has further reported ion densities of up to 3 × 10²² ions/m³ and confinement times of up to 0.5 ms. Trenta uses pulsed magnetic fields at very high pressures to compress the fusion plasma to fusion conditions. The company pioneered the use of the Field-Reversed Configuration to merge the plasmas.
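The figures quoted above can be plugged into the Lawson "triple product" n·T·τ to give a feel for how far such devices still are from ignition conditions. The D-T ignition threshold of roughly 3 × 10²¹ keV·s/m³ used below is a commonly cited figure, not something stated in the article:

```python
# Back-of-the-envelope fusion triple product from the reported figures.
# Assumption (not from the article): D-T ignition needs n*T*tau of roughly
# 3e21 keV*s/m^3, a commonly quoted Lawson-criterion value.
K_B_EV_PER_K = 8.617e-5                  # Boltzmann constant, eV per kelvin

n_ions   = 3e22                          # reported ion density, ions/m^3
temp_kev = 100e6 * K_B_EV_PER_K / 1e3    # 100 million deg C ~ 100e6 K -> ~8.6 keV
tau      = 0.5e-3                        # reported confinement time, s

triple = n_ions * temp_kev * tau
print(f"T ~ {temp_kev:.1f} keV, triple product ~ {triple:.2e} keV*s/m^3")
print(f"Fraction of ~3e21 threshold: {triple / 3e21:.1%}")
```

The temperature is already in the right range; it is the density-times-confinement part of the product that experimental machines are still working to raise.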

Currently, the company is developing its seventh-generation machine, Polaris, which is expected to be operational by 2024 and to increase the pulse rate from one pulse every 10 minutes to one pulse every few seconds for short periods. The eighth-generation Antares began its design phase in January 2022 and is touted to be a game changer in fusion technology by the time it arrives in 2030.

TAE Technologies

Operating since 1998, TAE Technologies' most interesting innovation is its choice of fuel. While most fusion experiments use hydrogen isotopes as fuel, TAE employs hydrogen-boron as its fusion fuel. Choosing hydrogen-boron has unique advantages, such as easier extraction and an abundant supply. In addition, it does not generate radioactive waste, which gives it a substantial advantage over fission energy. To achieve confinement, TAE employs an accelerator with a technological base similar to the particle accelerators found at CERN. TAE's C2U (operational 2013-2015) was the first machine in the world to achieve and sustain plasma generation by marrying the concepts of accelerator and particle physics with conventional plasma and fusion physics.

TAE's incremental steps towards a reactor-scale accelerator have resulted in the birth of Norman (operational 2016-2022), a bigger and improved version of the C2U. The ultimate goal, however, is Da Vinci, which shall be the launching pad for fusion commercialization by 2030. It will be the first machine in the world to use hydrogen-boron fuel and the first to deliver electricity to the grid using fusion.

These breakthroughs are small stepping stones on an arduous path, in humanity's yearning for a safe, clean, and abundant energy source. They provide glimmers of hope in a dark tunnel that leads to a possible future in which fusion energy powers our homes. It is uncertain whether nuclear fusion will be the answer to it all, but the knowledge provided by its research is of immense value nonetheless. The potential of nuclear fusion is immense, but the road ahead is still long and uncertain. Only time will tell whether we can truly harness the power of the stars and usher in a new era of clean, sustainable energy.


-Aman Upganlawar

Pune Institute of Computer Technology



Philomath

DeFi and Blockchain Revolution
Banking Without Intermediaries

Centralization has been instrumental in translating real-world systems into computer-administered systems, leading to the widespread adoption of computers in various fields and empowering developers worldwide. However, as time has passed, the flaws and issues associated with centralized systems have become more apparent. In centralized finance, your money is held by banks and corporations whose primary aim is to generate profits. The financial system heavily relies on intermediaries to facilitate the movement of money between parties, each of which charges fees for its services. Additionally, various financial transactions, such as loan applications, can be time-consuming and costly, while accessing bank services while traveling may be challenging.

Decentralized finance (DeFi) eliminates intermediaries from financial transactions by leveraging emerging technologies. It allows people, merchants, and businesses to transact directly through peer-to-peer financial networks that utilize security protocols, stablecoins, software, and hardware advancements. These networks enable individuals to lend, trade, and borrow from anywhere with an internet connection. Transactions are recorded and verified using software that leverages distributed financial databases, accessible across multiple locations. The databases collect and aggregate data from all users and use a consensus mechanism to verify transactions, thus eliminating the need for intermediaries.

DeFi has become increasingly popular among consumers due to several key attractions: the elimination of the fees typically charged by banks and other financial companies, secure digital wallets for holding money, and the ability for anyone with an internet connection to use the system without requiring approval. Additionally, DeFi allows for near-instantaneous fund transfers, making financial transactions quicker, more straightforward, and confidential.

Blockchain technology has revolutionized the financial industry by utilizing cryptocurrencies and decentralized control. This technology has the potential to provide greater flexibility and deeper insights into areas such as healthcare, food quality, and the provenance of goods. Blockchain also has the ability to offer a verifiable identity to the 1.1 billion people worldwide who lack documented proof of their existence, and to the nearly 2 billion unbanked individuals who do not have access to traditional banking services. The World Economic Forum predicts that by 2027, 10 percent of the world's Gross Domestic Product (GDP) will be stored on blockchain technology.

The term "blockchain" refers to a series of blocks containing information. It was first introduced by a group of researchers in 1991 to timestamp digital documents, preventing them from being tampered with or backdated; the concept is similar to that of a notary, providing a secure and tamper-proof record of digital transactions. Although the concept was introduced in 1991, it remained largely unused until 2009, when Satoshi Nakamoto adapted it to create the digital cryptocurrency Bitcoin. A blockchain is a distributed ledger that is open to anyone, and once data is recorded within it, it becomes extremely difficult to alter. The ledger consists of a chain of records, or blocks, which are shared among all users, creating a public distributed ledger.

When a user buys cryptocurrency, they receive two keys: a public key and a private key. The public key serves as the user's address, similar to an email address, and is known to everyone in the network. The private key, on the other hand, is known only to the user and acts as a password.

Let's take an example of a transaction between two individuals, Phil and Jack. To initiate the transaction, Phil specifies the number of bitcoins he wants to transfer to Jack's unique wallet address, along with his own address. All this information is then hashed using a hashing algorithm and encrypted using Phil's unique private key. This digital signature indicates that the



The first recognized commercial bitcoin transaction took place in 2010, when an individual in Florida exchanged 10,000 bitcoins, valued at approximately $40 at the time, for two pizzas.

transaction originated from Phil. The output is then encrypted using Jack's public key and transmitted across the world; it can be decrypted only with Jack's private key, which is known only to Jack.

Each block contains some data, the hash of the block, and a hash of the previous block.

1. Data: The data stored inside a block depends on the type of blockchain. The Bitcoin blockchain, for example, stores the details of a transaction, such as the sender, the receiver, and the number of coins.

2. Hash: A block also has a hash, which identifies the block and all of its contents and is always unique, just like a fingerprint. Once a block is created, its hash is calculated. Changing something inside the block will cause the hash to change; if the fingerprint of a block changes, it no longer is the same block.

3. Hash of the previous block: The third element inside each block is the hash of the previous block. This effectively creates a chain of blocks, and it is this technique that makes a blockchain so secure.

Let's take an example: a chain of three blocks, where each block has its own hash and the hash of the previous block, so block number 3 points to block number 2, and block number 2 points to block number 1. The first block of a blockchain is called the genesis block. Now, let's say you tamper with the second block. This causes the hash of block 2 to change, which in turn makes block 3 and all following blocks invalid, because they no longer store a valid hash of the previous block.

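The data/hash/previous-hash structure and the effect of tampering can be demonstrated in a few lines. A minimal sketch using SHA-256, with invented transactions for illustration:

```python
import hashlib

def block_hash(data, prev_hash):
    """A block's 'fingerprint': SHA-256 over its data plus the previous hash."""
    return hashlib.sha256((data + prev_hash).encode()).hexdigest()

# Build a 3-block chain; the genesis block has no real predecessor.
chain = []
prev = "0" * 64
for data in ["genesis", "Phil -> Jack: 2 BTC", "Jack -> Ann: 1 BTC"]:
    h = block_hash(data, prev)
    chain.append({"data": data, "prev": prev, "hash": h})
    prev = h

def is_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, blk in enumerate(chain):
        if blk["hash"] != block_hash(blk["data"], blk["prev"]):
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

print(is_valid(chain))                        # True
chain[1]["data"] = "Phil -> Jack: 200 BTC"    # tamper with block 2
print(is_valid(chain))                        # False: block 2's hash no longer matches
```

Changing a single character in block 2 breaks its own fingerprint, and every later block still points at the old fingerprint, so the whole tail of the chain is invalidated at once.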

Although using hashes can prevent tampering to some extent, hashes alone are not enough to ensure security. With modern computing power, it is possible for a malicious user to tamper with a block and then recalculate the hashes of all the other blocks to make the blockchain valid again. To address this issue, blockchains incorporate a mechanism called proof of work, which slows down the creation of new blocks. Miners compete to solve a complex mathematical problem, and the first miner to solve it gets to create the next block. This process requires a lot of computational power, making it difficult for an attacker to recalculate all the hashes in the blockchain.
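The "complex mathematical problem" in Bitcoin-style proof of work is essentially a hash puzzle: find a nonce so that the block's hash falls under a target. A minimal sketch, where difficulty is counted in leading zero hex digits (a simplification of Bitcoin's real target):

```python
import hashlib

def mine(data, prev_hash, difficulty=4):
    """Find a nonce so the block hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        h = hashlib.sha256(f"{data}{prev_hash}{nonce}".encode()).hexdigest()
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

nonce, h = mine("Phil -> Jack: 2 BTC", "0" * 64)
print(f"nonce={nonce}, hash={h[:16]}...")
```

The asymmetry is the point: finding the nonce takes on the order of 16⁴ ≈ 65,000 hash attempts at this difficulty, but verifying a proposed block takes exactly one hash, so honest nodes can check cheaply while attackers must redo the work for every block they alter.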

Proof-of-work (PoW) and proof-of-stake (PoS) are two popular consensus mechanisms used in blockchain technology to validate transactions. These mechanisms help ensure the honesty of users by incentivizing good actors and making it difficult and costly for bad actors to engage in fraudulent activities such as double-spending.

The proof-of-work consensus mechanism demands a substantial amount of energy to validate transactions, because computers on the network are required to perform numerous operations, making the blockchain less environmentally friendly than other systems. Additionally, centralization is a problem, as top miners continually compete for rewards: as cryptocurrencies have gained popularity, a small group of miners has come to dominate the blockchain, raising concerns about its centralized nature.

CryptoKitties, one of the pioneering blockchain games, allows users to breed exclusive digital cats that possess one-of-a-kind characteristics, ensuring they cannot be duplicated.

In the process of mining, a complex mathematical problem needs to be solved, and the miner who solves it first is given the opportunity to create a new block containing a set of transactions. This block is then broadcast to the network of nodes, which individually audit the existing ledger and the new block. If the audit is successful, the new block is appended to the previous block, creating a chronological chain of transactions, and the miner is rewarded with bitcoins for the resources expended in the mining process. This energy-intensive process ensures that only those who can prove their expenditure of resources are granted the right to append new transactions to the blockchain, thus securing the network.

In the proof-of-stake system, validators are selected to add a block to the blockchain based on the amount of cryptocurrency they hold, instead of competing with each other to solve a puzzle as in proof-of-work. The concept of "staking" replaces the work done by miners. This staking mechanism ensures network security because participants must purchase and hold the cryptocurrency to be chosen to create a block and receive rewards. Participants must spend money and commit financial resources to the network, similar to how miners must expend electricity in a proof-of-work system.
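Stake-weighted selection itself is straightforward to sketch: the chance of proposing the next block is proportional to the amount staked. The validator names and stake amounts below are invented for the illustration:

```python
import random

# Hypothetical validators and their staked amounts (illustrative numbers only).
stakes = {"alice": 5_000, "bob": 2_000, "carol": 500}

def pick_validator(stakes, rng=random):
    """Select the next block proposer with probability proportional to stake."""
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

# Simulate many selection rounds to see the stake-proportional behavior.
random.seed(42)
counts = {n: 0 for n in stakes}
for _ in range(10_000):
    counts[pick_validator(stakes)] += 1
print(counts)   # alice, with 2/3 of the total stake, wins roughly 2/3 of the rounds
```

This also makes the drawback discussed below concrete: whoever can afford the largest stake is selected most often, which is exactly the wealth-concentration concern raised about proof-of-stake.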

However, proof-of-work is criticized for the high energy consumption involved in verifying transactions, making it less environmentally friendly than other systems. Additionally, there is a risk of centralization, as a small group of miners may come to control the blockchain in a proof-of-work system.

Proof-of-stake has a major drawback in that it often requires a significant initial investment. To qualify as a validator, one must purchase enough of the native cryptocurrency token, and the required amount depends on the size of the network. This can lead to a blockchain dominated by wealthy individuals, as only those who can afford to buy a network stake will be able to participate as validators. As the value of cryptocurrencies continues to rise, this issue may become more pronounced.

Although proof-of-work is currently the most widely used consensus model for blockchain networks, alternative models like proof-of-stake offer several advantages. For example, they can increase security, reduce energy consumption, and enable networks to scale more effectively. Given the significant environmental impact of proof-of-work, alternative models will likely become more popular in the future.

Blockchain security is achieved through this unique use of hashing and the proof-of-work mechanism. In addition, blockchain networks are decentralized and operate on a peer-to-peer basis, rather than relying on a central authority. When someone joins a blockchain network, they receive a complete copy of the blockchain and can use it to ensure the integrity of the network. Nodes are individual computers connected to the blockchain network that receive a copy of the blockchain data. When a new block is added to the blockchain, it is broadcast to all nodes on the network, and each node must verify the block to ensure that it has not been tampered with before adding it to its copy of the blockchain. As blockchain technology continues to develop, it will likely continue to evolve and improve.

Smart contracts are a recent innovation in the blockchain space. They are self-executing contracts in which the terms of the agreement between buyer and seller are written directly into code on the blockchain. Smart contracts allow for the automation of the execution of an agreement or

workflow, so that all participants can be certain of the outcome without requiring an intermediary or losing time. They operate by following a set of predetermined conditions written into the code, which are executed by a network of computers when verified. These actions can include releasing funds, registering a vehicle, sending notifications, or issuing a ticket. The blockchain is updated when the transaction is completed, and only parties with granted permission can see the results.

Web 3.0 is a possible future version of the internet that is based on public blockchains. One of its key features is decentralization, which means that individuals themselves can own and govern sections of the internet, rather than relying on companies like Google, Apple, or Facebook to mediate access. Web 3.0 will rely heavily on non-fungible tokens (NFTs), digital currencies, and other blockchain entities. People can earn a living by participating in the protocol in various ways, both technical and non-technical, while consumers of the service pay to use the protocol, much like they would pay a cloud provider like Amazon Web Services.

Overall, the future looks bright for blockchain-based products, which are rapidly flooding the digital market and attracting the attention of developers around the world. The decentralization offered by blockchain technology promises a safe, secure, and fast future.

To create a smart contract, participants must determine how transactions and their data are represented on the blockchain, agree on the rules that govern those transactions, explore all possible exceptions, and define a framework for resolving disputes. Once these conditions have been met, a developer can program the smart contract.
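The "predetermined conditions" idea can be sketched in ordinary code. This is a toy, off-chain illustration of smart-contract logic; real smart contracts are deployed on-chain (for example, written in Solidity on Ethereum), and the parties and amounts here are invented for the example:

```python
# Toy escrow "contract": funds are released automatically once the agreed
# condition is met, with no intermediary deciding the outcome.
class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = "FUNDED"          # buyer's funds are locked in the contract

    def confirm_delivery(self, confirmed_by):
        # Predetermined condition written into code: only the buyer's
        # confirmation triggers the transfer to the seller.
        if self.state == "FUNDED" and confirmed_by == self.buyer:
            self.state = "RELEASED"
            return f"{self.amount} released to {self.seller}"
        return "no state change"

contract = EscrowContract(buyer="Phil", seller="Jack", amount="2 BTC")
print(contract.confirm_delivery("Jack"))   # no state change: wrong party
print(contract.confirm_delivery("Phil"))   # 2 BTC released to Jack
```

On a real blockchain this logic would run on every node, so no single party could alter the condition or block the payout once the contract was deployed.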

Have you ever heard of odometer fraud? This illegal practice involves tampering with a car's odometer to make the car appear newer and less worn, leading buyers to pay more than it is actually worth. While the government collects mileage information during safety inspections, this is not enough to prevent the fraud. One potential solution is to replace traditional odometers with "smart" ones that are connected to the internet and frequently record the car's mileage on a blockchain. This would create a secure digital certificate for each vehicle, allowing anyone to easily access its history.

-Shatakshi Chaudhari

Pune Institute of Computer Technology



Philomath

Quantum Computing
A Revolution in Computing Technology

Quantum computing has emerged as a revolutionary technology with the potential to transform our world in ways that were previously unimaginable. With its ability to process information at speeds that far exceed those of classical computers, quantum computing is poised to solve some of the world's most pressing problems and revolutionize the way we live and work.

In simple terms, quantum computing is a new way of processing information that uses quantum bits, or qubits, instead of the classical bits used in traditional computers. A qubit can exist in multiple states simultaneously, allowing a vast number of computations to be explored at once. A classical computer processes data sequentially, one bit at a time, whereas a quantum computer can work on many computations simultaneously, which significantly speeds up processing.
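Superposition can be illustrated by simulating a single qubit as a vector of two amplitudes: a Hadamard gate puts |0⟩ into an equal superposition, and each measurement then collapses it to 0 or 1 with the squared-amplitude probabilities. A minimal sketch, not taken from the article:

```python
import math
import random

# One qubit as two amplitudes; |0> is [1, 0].
state = [1.0, 0.0]

def hadamard(s):
    """Apply the Hadamard gate: puts |0> into an equal superposition of |0> and |1>."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[1]), h * (s[0] - s[1])]

state = hadamard(state)
probs = [a ** 2 for a in state]      # measurement probabilities: amplitudes squared
print(probs)                         # ~[0.5, 0.5]

# Each measurement collapses the qubit to a definite 0 or 1.
random.seed(0)
samples = [0 if random.random() < probs[0] else 1 for _ in range(1000)]
print(sum(samples) / 1000)           # close to 0.5
```

A classical simulation like this needs 2ⁿ amplitudes for n qubits, which is exactly why real quantum hardware becomes interesting: it holds that exponentially large state natively.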

Quantum computing is booming nowadays due to its potential to solve complex problems that classical computers cannot handle efficiently. With quantum computing, it is possible to process data much faster than with classical computers, making it a revolutionary technology in the field of computing. For example, quantum computing can be used to optimize traffic flows in cities, develop personalized medicine, and even create new materials with specific properties. With its ability to process vast amounts of data simultaneously, quantum computing offers a powerful new tool for tackling complex problems in a wide range of domains.

In addition to its speed and power, quantum computing is also highly versatile. It is already being used to simulate complex chemical reactions, develop new cryptographic techniques, and optimize supply chains, among many other applications. The applications of quantum computing are vast and varied, ranging from cryptography and finance to healthcare and transportation. In the finance sector, for example, quantum computing can be used to analyze financial data and make predictions based on multiple variables simultaneously, thus improving accuracy and reducing risk. In the healthcare sector, quantum computing can help in the development of personalized medicine, enabling the identification of the most effective treatments based on an individual's genetic makeup.

Quantum algorithms are sets of instructions designed to run on a quantum computer, allowing for the efficient processing of complex problems. Quantum error correction is the process of detecting and correcting errors that occur during computation, which is crucial for the accuracy and reliability of quantum computing. Quantum cryptography uses the principles of quantum mechanics to secure communications, making it virtually impossible to intercept or decode messages. Quantum simulation is the process of using a quantum computer to simulate complex systems, such as chemical reactions or weather patterns, that are difficult to model using classical computers.

One glimpse of the very extensive applications of quantum computing is quantum simulation. A simple example is the behavior of a molecule, which quickly becomes intractable for classical computers. With quantum simulation, however, a quantum computer can simulate the behavior of a molecule by using quantum algorithms to calculate the positions and interactions of each atom in the molecule.



Quantum computing requires extremely cold temperatures, as sub-atomic particles must be as close as possible to a stationary state to be measured.

Such a simulation can provide valuable insights into the behavior of the molecule, which can be used to design better drugs and materials. In the pharmaceutical industry, for example, quantum simulation can be used to model the behavior of a drug molecule interacting with a target protein in the body.

One widely used method for such simulations is the Variational Quantum Eigensolver (VQE). Starting from an initial qubit state, a trial wavefunction is created by applying a set of gates, where the gates include adjustable parameters that can be tuned to improve the accuracy of the energy calculation. The energy expectation value of the trial wavefunction is evaluated by applying the Hamiltonian operator, which represents the total energy of the molecule, to the trial wavefunction:

E(θ) = <Ψ(θ)|H|Ψ(θ)>

where E is the energy, H is the Hamiltonian operator, |Ψ(θ)> is the trial wavefunction with parameters θ, and <Ψ(θ)| is the Hermitian conjugate of |Ψ(θ)>. The parameters of the trial wavefunction are then optimized using classical optimization techniques to minimize the energy expectation value, and the optimization is typically repeated until the energy expectation value converges to a minimum. The output of the VQE algorithm is this minimum energy value, which represents the ground-state energy of the molecule.

In summary, the VQE algorithm is a hybrid classical-quantum algorithm that uses quantum circuits with adjustable parameters to simulate the behavior of molecules. It has potential applications in a variety of fields, including materials science, chemistry, and drug discovery.
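The VQE loop described above can be sketched end-to-end for the smallest possible case: a single qubit with the stand-in Hamiltonian H = Z (not a molecular Hamiltonian), trial state |Ψ(θ)⟩ = cos(θ/2)|0⟩ + sin(θ/2)|1⟩, and a crude classical gradient-descent optimizer evaluating E(θ) = ⟨Ψ(θ)|H|Ψ(θ)⟩ classically:

```python
import math

def energy(theta):
    """E(theta) = <Psi(theta)| Z |Psi(theta)>, which works out to cos(theta)."""
    psi = [math.cos(theta / 2), math.sin(theta / 2)]
    return psi[0] ** 2 * (+1) + psi[1] ** 2 * (-1)  # Z eigenvalues are +1 and -1

# Classical optimizer: finite-difference gradient descent on the parameter.
theta, lr = 0.3, 0.4
for _ in range(200):
    grad = (energy(theta + 1e-5) - energy(theta - 1e-5)) / 2e-5
    theta -= lr * grad

print(f"theta ~ {theta:.3f}, estimated ground-state energy ~ {energy(theta):.4f}")
```

The optimizer drives θ to π, where the energy reaches the true ground-state value of -1 for H = Z. In a real VQE, only the energy evaluation runs on quantum hardware; the parameter update loop stays classical, which is what makes it a hybrid algorithm.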

Returning to the root of widely useful technologies like these, quantum computing undoubtedly holds an irreplaceable position in the future of technology. As the technology continues to mature, we can expect to see even more powerful applications emerge. From developing new materials with unique properties to optimizing complex transportation networks, quantum computing has the potential to transform the way we live and work in ways that were previously unimaginable.

In conclusion, quantum computing is a truly revolutionary technology. With its ability to solve complex problems at speeds that far exceed those of classical computers, quantum computing is poised to become one of the most important technologies of our time. As the field continues to evolve and new applications emerge, we can look forward to a future that is brighter, faster, and more efficient than ever before.


-Anushka Joshi

Pune Institute of Computer Technology



Maven

Cybersecurity and Teenagers
Tussle Between Curiosity and the Law

The internet has become an integral part of our lives, but it has also created new challenges in terms of security and privacy. Cybersecurity is a critical concern in today's world, with cybercrime and cyber-attacks becoming more prevalent every day. As a result, governments worldwide have introduced new cybersecurity laws to protect users from potential online threats. The introduction of these laws, however, has created a significant issue: the tussle between hardened cyber laws and curious teenagers. Teenagers are often early adopters of new technology, and as such, they have been at the forefront of online activity. They are also more likely to push boundaries and experiment with new technologies and applications, which puts them at odds with the law.

One example of this is the 2013 incident involving a 16-year-old high school student from Pennsylvania. The student was charged with identity theft, harassment, and other computer-crime offenses for creating a parody Twitter account of his school's principal. The incident sparked a debate about the use of cyber laws to regulate teenage behaviour online and highlighted the need for clearer guidelines and policies around online behaviour, especially among teenagers who may not fully understand the implications of their actions.

Another incident occurred in May 2021, when a 16-year-old boy from Florida was arrested and charged with multiple felonies after hacking into his school's computer system to manipulate grades and attendance records. The student obtained a teacher's login credentials and used them to change his own grades and those of several other students. He also manipulated attendance records to avoid disciplinary action.

The incident underscores the need for schools and other institutions to have strong cybersecurity measures in place to prevent unauthorized access to and manipulation of sensitive data, as well as the importance of educating young people about the potential consequences of cybercrime and the importance of respecting the privacy and security of others.

In addition to these incidents, there have been several other instances where curious teenagers have run afoul of cyber laws. One example of this tussle between cyber laws and curious teenagers is the case of "Cracka," a teenage hacker who gained notoriety in 2015 after hacking into the personal email account of John Brennan, the then-director of the CIA. Cracka and his group, "Crackas with Attitude," also targeted other high-profile individuals, including the director of national intelligence and the secretary of homeland security.

While Cracka and his group were eventually arrested and charged with cybercrime, the case highlighted the tension between the need to enforce cyber laws and the natural curiosity of teenagers who may not fully understand the potential consequences of their actions.

Another example of this tussle is the case of "Coinkiller," a teenager who was arrested in 2018 for creating a program that could steal cryptocurrency from others. Coinkiller's program was sophisticated and could evade many cybersecurity measures, making it a significant threat to those who use cryptocurrency. However, Coinkiller claimed that he had created the program to learn more about cybersecurity and to test his skills, rather than to steal from others. The case highlights the difficulty of distinguishing between malicious intent and innocent curiosity when it comes to cybercrime.

Governments around the world have responded to the rise in cybercrime with both education and strict legislation. For example, the UK government launched the CyberFirst program in 2016, which aims to teach teenagers about cybersecurity and encourage them to pursue careers in the field. Similarly, the European Union enacted the General Data Protection Regulation (GDPR) in 2018, one of the most comprehensive data privacy laws in the world.




In the United States, the Children's Online Privacy Protection Act (COPPA) was enacted in 1998 to protect the online privacy of children under the age of 13. COPPA requires websites and online services to obtain parental consent before collecting personal information from children, and it has been successful in protecting children from online predators.

However, these laws can sometimes have unintended consequences. Some teenagers may feel that their online privacy is being violated by COPPA, which could lead them to circumvent the law in order to protect their privacy. Similarly, some teenagers may view the strict penalties associated with cybercrime as unfair, which could lead them to engage in illegal activities out of a sense of rebellion or protest.

Governments need to ensure that their cybersecurity laws are clear and concise and that they consider the unique circumstances of teenagers. While it is essential to protect users from online threats, governments must also avoid overregulation that could stifle innovation and creativity. Although these regulations aim to protect users' privacy and security, they can also create tension with teenagers' natural inclination to explore and experiment online.

Governments, parents, educators, and young people themselves all

have a role to play in creating a safer and more

responsible online community. It is essential

to strike a balance between protecting users

from potential online threats and ensuring that

teenagers have the freedom to explore and learn

online. By adopting clearer guidelines and policies

around online behaviour, educating young people

about the potential risks and consequences of

their actions, and providing adequate support and

resources for those who may be at risk of online

threats, we can ensure that the internet remains

a vibrant and innovative space for everyone to

explore and learn.

Teenagers often push boundaries and try new

technologies and applications, which can

put them at odds with the law. The incident

involving the Pennsylvania high school student’s

parody Twitter account illustrates this tension.

By encouraging responsible digital citizenship,

parents and educators can help young people

understand the importance of respecting the

privacy and security of others and being aware of

the potential consequences of their actions online.

In conclusion, the tussle between hardened cyber

laws and curious teenagers is a complex issue

that requires careful consideration and balancing

of competing interests.

-Abhijeet Ingle

Pune Institute of Computer Technology


Technology of the year

First transient electronic bandage

A team of researchers at Northwestern University has created a novel bandage that utilizes electrotherapy to expedite wound healing. The bandage is characterized by its small size, flexibility, and stretchability, which make it a comfortable option for patients to wear.

The bandage works by using a small, wireless device that is powered by a battery and attached to the bandage. The device generates an electric field that stimulates the cells in the wound area, helping to promote healing.

This bandage could be a promising new approach to wound healing, especially for chronic wounds that are slow to heal. It could also be useful for treating other types of injuries, such as burns or surgical incisions, and it is especially helpful for patients with diabetes.

In the US, almost 30 million people have diabetes, and a significant proportion of them, estimated at 15 to 25%, may experience a diabetic foot ulcer at some point in their lives. This is because the nerve damage caused by diabetes can result in numbness, making it easy for individuals with diabetes to overlook and neglect small wounds like blisters and scratches. Moreover, high blood glucose levels thicken the walls of capillaries, which slows blood circulation and makes it difficult for these injuries to heal. This creates ideal conditions for a minor injury to develop into a hazardous wound.

Biological Processes Involved in Healing

The process of wound healing is intricate and consists of multiple stages, involving diverse biological processes and various types of cells. Here’s a general overview of how wounds heal:

Hemostasis:

The first stage of wound healing is hemostasis,

which involves the formation of a blood clot to

stop the bleeding. Platelets in the blood release

chemicals that help to form a clot, and blood

vessels in the area constrict to reduce blood flow

to the wound.

Inflammation:

The second stage of wound healing is

inflammation, which involves the activation of

immune cells to remove any foreign particles

and prevent infection. Immune cells, such as

neutrophils and macrophages, migrate to the

wound site to clean up debris and release growth

factors that help to initiate the healing process.

Proliferation:

The third phase of wound healing is known

as proliferation. During this stage, new tissue

is formed to replace the damaged tissue.

Collagen production is initiated by fibroblasts,

which aid in the reconstruction of the

extracellular matrix. Additionally, endothelial

cells generate fresh blood vessels to transport

oxygen and nutrients to the wound site.

Remodeling:

The last phase of wound healing is known as

remodeling. It comprises the maturation and

arrangement of the newly generated tissue.

Collagen fibers are realigned and cross-linked to

provide strength and stability to the wound, and

excess cells are removed through a process called

apoptosis. The healing process can take anywhere

from a few days to several weeks, depending

on the severity of the wound and the patient’s overall health. Factors such as age, nutrition, and underlying medical conditions can also affect the healing process.

In an animal study, the new bandage healed diabetic ulcers 30% faster than they healed in mice without the bandage.

Why Electrotherapy?

The concept of using electricity to stimulate wound healing is not a novel one. Electrical stimulation has been used

in medicine for decades to treat a variety

of conditions, including bone fractures,

chronic pain, and slow-healing wounds.

However, the challenge has been to develop a

device that is small, flexible, and comfortable

for patients to wear. The development of this

bandage represents a significant step forward

in the field of electrotherapy for wound healing.

Numerous scientific journals, such as Nature

Communications, Advanced Healthcare

Materials, and Science Advances, have

published research regarding this bandage.

The next step for this technology is to

conduct further studies to determine its

safety and effectiveness in human patients.

Construction:

A smart regenerative system has been developed

that consists of two electrodes on one side - a small

flower-shaped one placed directly over the wound

bed and a ring-shaped one placed on healthy

tissue surrounding the wound. The other side of

the device features an energy-harvesting coil to

power it, and a near-field communication (NFC)

system for real-time wireless data transmission.

Additionally, sensors have been incorporated into the device that measure the wound’s healing progress by analyzing the electrical resistance across the wound.
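The resistance-based monitoring idea can be sketched in a few lines of Python. This is only an illustration of the principle (Ohm's law plus a simple trend check on made-up readings), not the device's actual firmware; the drive voltage, thresholds, readings, and function names here are all hypothetical.

```python
# Illustrative sketch of resistance-based wound monitoring (all values
# hypothetical). The device applies a known voltage and measures the
# current flowing across the wound; resistance is then V / I (Ohm's law).
# As tissue regrows, the measured current falls.

APPLIED_VOLTAGE = 1.0  # volts (hypothetical drive voltage)

def resistance(current_amps: float) -> float:
    """Ohm's law: R = V / I."""
    return APPLIED_VOLTAGE / current_amps

def healing_trend(currents):
    """Classify a series of daily current readings (amps).

    A steady decrease in current suggests normal healing; a current that
    stays high suggests the wound is not closing and needs attention.
    """
    if all(b < a for a, b in zip(currents, currents[1:])):
        return "healing: current decreasing steadily"
    if currents[-1] >= 0.9 * currents[0]:
        return "alert: current remains high, wound may need attention"
    return "inconclusive: review readings"

# Simulated daily readings in amps (hypothetical):
healing = [1.0e-3, 8.0e-4, 6.5e-4, 5.0e-4]
stalled = [1.0e-3, 9.9e-4, 9.8e-4, 9.9e-4]

print(healing_trend(healing))  # current decreasing -> healing
print(healing_trend(stalled))  # current stays high -> alert
```

In the actual device the readings would travel over the NFC link described above, and the interpretation would rest with a physician rather than a threshold in code.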

One of the notable features of this device is that, while it is an electronic device, the components that interface with the wound bed are entirely resorbable. This means that these materials will disappear naturally after the healing process is complete, thus preventing any damage to the tissue that might result from physical extraction, according to John A. Rogers, who co-led the study at Northwestern University.

Working:

The bandage works by delivering electrotherapy

directly to the wound site using a wireless device

that is attached to the bandage. The device

generates an electric field that stimulates the cells

in the wound area, helping to promote healing.

The electric field generated by the device helps

to promote the migration of cells involved in

the healing process, such as fibroblasts and

macrophages, to the wound site. These cells are

responsible for producing the extracellular matrix

that helps to rebuild damaged tissue and facilitate

the closure of the wound.

The bandage is designed to be flexible and

stretchable, which makes it comfortable for

patients to wear. The wireless device that delivers

the electrotherapy is also small and lightweight,

which means that it can be easily integrated into

the bandage.

The bandage is still in the experimental stage, and

more research needs to be done to determine its

safety and effectiveness. However, the preliminary

results are promising, and this technology could

potentially lead to more effective and efficient

treatments for patients with chronic wounds.

Physicians can monitor the healing process by noting a gradual decrease in the current measurement. If the current remains high, it may indicate a problem that requires attention.




“Although it’s an electronic device, the active components that interface with the wound bed are entirely resorbable,” said Northwestern’s John A. Rogers, who co-led the study. “As such, the materials disappear naturally after the healing process is complete.”


Drawbacks:

While the development of the electrotherapy

bandage is promising, there are still some potential

drawbacks and limitations to the technology

that need to be addressed. Some of these include:

Safety concerns: While the bandage is designed to

be safe for use on human skin, there is still a risk

of skin irritation or other adverse reactions. Further

testing and clinical trials will be needed to ensure

the safety of the bandage in human patients.

Cost: The wireless device that delivers the electrotherapy

is relatively expensive to produce,

which could make the bandage too costly for

some patients or healthcare systems to afford.

Battery life: The wireless device that powers the bandage has a limited battery life, so the bandage would need to be changed regularly. This could be inconvenient for patients and could increase the overall cost of treatment.

Limited applications: The bandage is designed specifically for wounds, and its use may be limited to certain types of injuries or wounds. Additional research is necessary to establish the complete range of potential applications for this technology.

Efficacy: While the preliminary results of the bandage are promising, more research is needed to determine the long-term efficacy of the technology and how it compares to other treatments for chronic wounds.

Conclusion:

In the future, this technology could also be used to treat other types of injuries, such as burns or surgical incisions. It could potentially be integrated into other types of medical devices, such as prosthetics, to help promote healing and reduce the risk of infection.

Overall, the electrotherapy bandage represents an exciting advancement in the field of wound healing, with the potential to improve the lives of millions of people around the world who suffer from chronic wounds; however, more research is needed to fully understand its potential benefits and limitations.

-The Editorial Board


Human Cloning

Ethical and Scientific Implications


Philomath

As I was browsing Instagram reels, I came across

an intriguing video that claimed 90% of parents

at some point in their lives wish for their children

to be their exact replicas. Although I cannot

verify the accuracy of this statistic, it did make

me contemplate the topic of human cloning.

In essence, human cloning is the process of

creating an identical copy of a human being.

While it is an active area of research, human

cloning is not being practiced anywhere in

the world as of 2023. There are two primary

methods being explored: somatic-cell nuclear

transfer and pluripotent stem cell induction.

Somatic-cell nuclear transfer begins by removing

the chromosomes from an egg to create an

enucleated egg. Then, the chromosomes are

replaced with a nucleus extracted from a somatic

(body) cell of the individual or embryo to be

cloned. Both of the methods used to make live-born mammalian clones require implantation of an embryo in a uterus, followed by a normal period of gestation and birth. However, the clone’s genes will be taken from only one parent.

Despite the promise of the technology behind

human cloning, many ethical, social, and

scientific concerns remain unresolved. One

such issue is the uncertainty about whether

a clone would share the same thoughts,

ideologies, and personality traits as the original.

Additionally, the question of personal

identity, autonomy, and individuality arises.

Human cloning could potentially be used to create

organs and tissues for transplant, thereby solving

the current organ shortage crisis. It could also be

seen as a potential cure for cancer since cloning

replaces cells and creates newer, younger cells of the

same DNA and components. However, the cloned

animals we have observed experienced health

degradation such as immune system disorders,

respiratory problems, and cardiovascular diseases.

As a result, it is imperative to ensure that human

cloning does not result in similar health issues.


Furthermore, human cloning could exacerbate

existing social and economic disparities by

creating a genetically superior race accessible

only to the wealthy. This would only serve

to perpetuate inequality and discrimination.

In conclusion, human cloning is a multifaceted

and controversial issue that demands careful

consideration of its pros and cons. While the

potential medical advancements of human

cloning are promising, it is vital to consider the

ethical, social, and scientific implications of the

technology. As the technology behind human

cloning develops, it is critical that we engage in

open and honest discussions about its implications

for society and the future of humanity. Ultimately,

we must ensure that the benefits of human cloning

do not come at the expense of fundamental

human values such as dignity and equality.

-Varad Kulkarni

Pune Institute of Computer Technology



PISB Office Bearers 2022-2023

Chairperson:

Vice-Chairperson:

Treasurer :

Vice-Treasurer:

Secretary:

Joint Secretary:

Secretary of Finance :

Atharva Naphade

Nidhi Yadav

Nandini Patil

Kunal Jaipuria

Tanvi Mane

Megha Dandapat

Karan Lakhwani

Harsh Bhatt

Sejal Jadhav

P.I.N.G. Team:

Samir Hendre

Sangeeta Singh

Shreyas Chandolkar

Aditi Bankar

Vibhav Sahasrabudhe

Aman Upganlawar

Aaryan Giradkar

Soham Kulkarni

Gayatri Sawant

Hitesh Sonawane

Riddhi Kulkarni

Riya Wani

Shatakshi Chaudhari

Varad Kulkarni

Joint Secretary of

Finance:

Public Relation Officer:

Purvi Toshniwal

Manas Sewatkar

Madhur Mundada

Aryan Mahajan

Web Master:

Marketing Head:

Vansh Tappalwar

Yash Mali

Aryan Mahajan

Aditi Mulay

Design Head:

Chinmay Surve

Tech Head:

Devraj Shetake

Shreyas Chandolkar

Sarvesh Varade

Tanmay Thanvi

P.I.N.G. Head:

Anupam Patil

Neil Deshpande




