Tailored in its functionality to teaching in design degree programmes... an interface for modern teaching
intoThing 2019: living with bot
Things with a digitally networked dimension are becoming increasingly autonomous, thinking and acting alongside us. What does this mean for us humans and for things in our everyday lifeworlds? Things no longer merely indicate status and mode; they convey intentions. Design challenge / research question: Can objects relate to our behavior patterns? We question the motivation for interaction and relationship building. The aim is to reflect on speculative future scenarios: living with a bot, designing an emotional relationship.
1.1 Speculative Design
Speculative Design sits under Conceptual Design and has many relatives, such as critical design, design fiction, design futures, anti-design, radical design, and interrogative design. Broadly, they are all united by the principle of using the language of design to pose questions, provoke, and inspire. The boundaries between Critical and Speculative Design are not clear (to me at least).

Differences between affirmative design (a) and critical/speculative design (b): if speculative design focuses on science and the potential future applications of applied technology, then critical design focuses on the present social, cultural, and ethical implications of design objects and practice (Malpass, 2013).

The essence of Speculative and Critical Design (SCD) is not to predict the future, but to present a certain future possibility through design. By inviting users to make value judgments, it prompts them to improve the "now" through action. In Simon's general design theory, any design is utopian in nature, because it always aims to transform the present with reference to the future. SCD treats the future as the present and looks at the real world from a parallel one, calmly rethinking the blind spots of technology within the mainstream ideology of technological optimism.
1.2 AI / Bot Context
a. Relationships between humans and AI:
AI-Human: How can the bot/AI build a more emotional relationship with a human? How can the bot/AI support and react in the human's emotional and irrational moments?
Human-AI-Self: How can the bot/AI help a human develop a deeper relationship with herself, becoming more conscious and self-reflective about her own decision making, emotional state, and sense of belonging?
Human-AI-Human: How can the bot/AI enhance human-to-human interaction?
Our AI acts mainly on the Human-AI-Human level.
To go beyond the clichés of AI and create consequential, fulfilling, and ethically sound new interactions, designers need to craft a refined aesthetic for the character that an AI system needs. This defines the behaviors, sociality, and narratives of the AI in interaction with humans, which are not perfect and rational, but often emotional and irrational. Our AI has a mild personality and limited autonomy: control belongs to the user, to avoid excessive dependence and unnecessary potential trouble. Our AI is humorous, positive, quiet, and considerate.
Designers need to create AI/bots that meet social and ethical responsibilities, that empower humans and expand human capacities and capabilities in order to heighten human-AI collaboration.
In the 1960s, in response to the diversified market demands of post-industrial society, many designers began to approach design from different angles, and new design ideas emerged, such as rationalism, deconstruction, and green design. Some designers attached great importance to vulnerable groups in society and proposed that design should provide convenience for the disabled and the elderly. This was the origin of barrier-free design. In product design, barrier-free design has a wide range of applications, from cars down to paper clips. In designing barrier-free products, the major concern is how to increase convenience and safety of use, which reflects care for the disabled and other vulnerable groups.
A role-play is a type of prototyping or simulation technique that can help quickly elicit the user experience for a product or service from the target audience. Like prototyping, a role-play can be used to gather data, tweak, and re-play to gather more data. The participants essentially play certain roles in a skit or a conversation. Depending on the expected nature of the exchange or the data to be gathered, some participants are given a script in advance, while others are asked to play themselves or specific roles based on instructions. The different scripts can be designed as different scenarios in which the participants are immersed, to understand how each one would react in specific situations.

After the research, we did a role-play to experience how blind people communicate with the world. Designers took turns playing blind and sighted people and carried out simple daily activities, such as going to the toilet, reaching a destination, smoking outside the door, and talking to others. One designer also played the role of an AI. Through this play, we understood that the ability of an AI is not limitless but has great limitations, and we looked for ways to let the AI help people within those limitations.
Interviews are most effective for qualitative research: they help you explain, better understand, and explore research subjects' opinions, behavior, experiences, and related phenomena. Interview questions are usually open-ended so that in-depth information is collected. So, in May we visited the BBS for blind communication in Halle and observed the communication and behavior of blind people. During this time we found six groups of blind people and conducted targeted interviews with them. They were of all ages and genders, which makes the sample more representative. As for the interview content, we wanted to learn about the real life of blind people and their understanding of products for the blind.
The products for the blind currently on the market include mobility products, daily-life products, entertainment products, and so on. They pay attention to the psychological characteristics, special needs, and particular traits of blind users and combine product semantics, ergonomics, materials, and colors. Although the market demand for blind products is large, the types and forms of the products are relatively limited, and the design concepts are still rather traditional; the few new and distinctive designs are also relatively expensive.
FaceOSC will track a face and send its pose and gesture data over OSC, as well as the raw tracked points (when selected in the GUI). It will also stream the entire image over Syphon (Mac only) as "FaceOSC Camera" when selected. FaceOSC is developed with openFrameworks and ofxFaceTracker, built on top of Jason Saragih's FaceTracker. The Windows build of FaceOSC was prepared by Dan Moore on the CreativeInquiry fork of ofxFaceTracker.
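FaceOSC streams its data as plain OSC messages over UDP (port 8338 by default). As a minimal sketch of how another program could receive them without extra dependencies, the following Python snippet decodes a single OSC message; the address pattern and the default port are assumptions based on FaceOSC's usual configuration.

```python
import socket
import struct

def _read_padded_string(buf, offset):
    """Read a null-terminated OSC string padded to a 4-byte boundary."""
    end = buf.index(b"\x00", offset)
    text = buf[offset:end].decode("ascii")
    offset = end + 1
    offset += (-offset) % 4  # skip the padding bytes
    return text, offset

def parse_osc_message(packet):
    """Decode one OSC message into (address, [float/int arguments])."""
    address, offset = _read_padded_string(packet, 0)
    tags, offset = _read_padded_string(packet, offset)
    args = []
    for tag in tags.lstrip(","):
        if tag == "f":  # 32-bit big-endian float
            args.append(struct.unpack_from(">f", packet, offset)[0])
            offset += 4
        elif tag == "i":  # 32-bit big-endian int
            args.append(struct.unpack_from(">i", packet, offset)[0])
            offset += 4
    return address, args

def listen(port=8338, handler=print):
    """Receive FaceOSC packets over UDP and hand them to `handler`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    while True:
        data, _ = sock.recvfrom(4096)
        handler(*parse_osc_message(data))
```

In a real setup one would more likely use an OSC library; this hand-rolled parser only covers the float/int messages FaceOSC typically sends.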
OpenPose represents the first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (in total 135 keypoints) on single images. By recognizing the positions of the limbs, the meaning and emotion of the other person's body language can be judged according to the basic rules of body language.
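As a hedged sketch of that second step, the following Python function applies one such basic rule to OpenPose-style output. The BODY_25 keypoint indices (0 = nose, 4 = right wrist, 7 = left wrist) and the "hands raised" rule are illustrative assumptions, not the project's actual code.

```python
# Assumed OpenPose BODY_25 indices: 0 = nose, 4 = right wrist, 7 = left wrist
NOSE, R_WRIST, L_WRIST = 0, 4, 7

def hands_raised(keypoints, conf_threshold=0.3):
    """keypoints: 25 (x, y, confidence) tuples; image y grows downward.

    Returns True when a confidently detected wrist sits above the nose,
    a crude stand-in for an excited or waving posture."""
    nose_x, nose_y, nose_conf = keypoints[NOSE]
    if nose_conf < conf_threshold:
        return False  # no reliable reference point
    for idx in (R_WRIST, L_WRIST):
        x, y, conf = keypoints[idx]
        if conf >= conf_threshold and y < nose_y:
            return True
    return False
```

A production version would combine many such rules (and their confidences) before announcing anything to the user.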
The most commonly and casually cited study on the relative importance of verbal and nonverbal messages in personal communication is one by Prof. Albert Mehrabian of the University of California, Los Angeles. His studies from the late 1960s and early 1970s suggested that we overwhelmingly deduce our feelings, attitudes, and beliefs about what someone says not from the actual words spoken, but from the speaker's body language and tone of voice.
In fact, Prof. Mehrabian quantified this tendency: words, tone of voice, and body language respectively account for 7%, 38%, and 55% of personal communication.
The non-verbal elements are particularly important for communicating feelings and attitude, especially when they are incongruent: if words and body language disagree, one tends to believe the body language.
Our goal was to create an AI assistant that helps blind people get instant information, which they cannot see, while communicating with others.
The user flow of the device is as follows:
1) Recognize the body language of the conversation partner.
2) Give information to the blind person when the body language carries emotional color.
3) When the situation is negative, the AI changes its form and reminds the blind person.
4) The user decides whether to accept or refuse further information.
5) After a rejection, the AI stops dealing with this topic.
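The five steps above can be reduced to a single decision function. This is an illustrative Python sketch of the flow, not the shipped logic; the action names are made up for the example.

```python
def decide(has_emotional_cue, is_negative, user_rejected):
    """One step of the device's user flow, reduced to a pure function."""
    if user_rejected:
        return "stop"     # step 5: drop the topic after a rejection
    if not has_emotional_cue:
        return "observe"  # step 1: keep watching, say nothing yet
    if is_negative:
        return "warn"     # step 3: change form and remind the user
    return "inform"       # step 2: relay the emotional cue
```

Keeping the flow as a pure function like this makes each branch easy to test against the scenarios described later.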
The colorful world is very attractive to blind people, but the lack of eyesight makes it difficult for them to get accurate information while communicating with others. We tried to create an AI concept to help.
Our AI is inspired by a figure from Greek mythology: Laocoon, who dared to tell the truth and expose the deception, but also died for it.
Laocoon accompanies the blind person during face-to-face communication. He recognizes and analyzes the facial expressions and the body language of the person opposite.
When necessary, for example when the body language has a strong emotional color (happy, angry, fearful, hesitant) or when what is being said differs from what the body conveys, Laocoon describes the situation or gives advice into the blind person's ear, so that they can make a more accurate judgment. But human behavior in interaction with an AI is not always perfect or rational; it is often emotional and irrational. In some situations the blind person may not want to hear it; in other words, the reality is too hard for them. Then they can give feedback to Laocoon: "Come on, I don't want to hear about it."
Observer: detects general information about the surroundings which the blind person cannot see.
Admonitor: gives real-time reminders and advice so that the blind person becomes aware of a potential problem and can avoid it when it occurs.
Interrupter: gives the blind person the choice of whether to continue listening to the AI's description.
The final product is a pair of AI glasses. It takes the form of glasses and represents a second pair of eyes for the blind.
There is a camera in the middle of the glasses that captures a view close to that of real eyes. It gathers information from the outside world (body language, etc.) and is a very important information-collection portal.
On the left side of the glasses, near the ear, is a circular device: the AI's speaker. When the camera has captured information and the analysis is done, and the AI wants to tell the blind person something, the device rotates toward the ear and starts a voice conversation. The rotating part has two meanings: first, it delivers the information; second, it tells other people, "I am looking at you, I know what you are doing."
There is a button on the right side of the glasses: the AI controller. When the AI constantly provides a large amount of information, blind people have the right to make their own judgments and choices about the excess. If they no longer want to receive AI messages, they can tap the control button. By touching it, the blind person gives feedback to the AI that they do not want to hear more; the AI then turns back and stops speaking.
Implementation of the Prototype
The prototype was built with an Arduino. The camera passes the recognized information (body language as well as facial expression) to the AI; the AI sends a signal about this information to the Arduino, which drives the servo motor so that the speaker rotates toward the ear and delivers the information together with a suggestion to the blind person. If the blind person does not want to hear it, they can knock on the glasses; a vibration sensor passes this feedback signal to the Arduino, which turns the servo motor back and stops the speech.
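The control loop just described can be simulated in a few lines. This Python sketch mirrors the Arduino logic for illustration only; the servo angles are hypothetical placeholders, not values from the actual firmware.

```python
class GlassesController:
    """Simulates the prototype's control loop: servo + speaker + knock sensor."""

    EAR_ANGLE = 90   # hypothetical servo position at the ear
    REST_ANGLE = 0   # hypothetical resting position

    def __init__(self):
        self.servo_angle = self.REST_ANGLE
        self.speaking = False

    def on_ai_message(self):
        """AI has something to say: rotate the speaker to the ear and talk."""
        self.servo_angle = self.EAR_ANGLE
        self.speaking = True

    def on_knock(self):
        """Vibration sensor fired: the user rejected; turn back, go silent."""
        self.servo_angle = self.REST_ANGLE
        self.speaking = False
```

On the real device, `on_ai_message` and `on_knock` correspond to the serial signal from the AI and the vibration-sensor interrupt, respectively.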
In a narrow street, a blind person wearing the AI glasses encounters a stranger. The stranger makes way in a friendly manner and gives the blind person priority to pass. The stranger's behavior is detected by the AI, and his amity is conveyed to the blind person, who then thanks the stranger.

A blind person wearing the AI glasses is shopping and tells the salesman that he wants to buy something attractive in price and quality. However, the salesman deliberately recommends something that is not cost-effective. After the AI reveals this, the blind person refuses the salesman's recommendation.

A blind person wearing the AI glasses comes across a friend in the street and is invited to join a party at the weekend. However, the friend's companion expresses his dissatisfaction and refusal through body language. Though the AI tells the blind person this, he insists on joining the party, and is treated with indifference at the weekend.