
So here we are at the end of the semester. We’ve studied social and new media from a lot of different angles, and seen lots of applications for it. We’ve discussed the morality of it all: the good, the bad, the silly, the sad, and every opinion in between. So the question remains: how should we use it?

Like any good question, I don’t believe there’s one answer here. You can’t just say that it should be used for good; sometimes social media is meant to just be a silly, lighthearted means of communication. But in any case, I’ll try to give some of my stronger thoughts on how I think it should be used.

Obviously, sometimes social media doesn’t need to have a point or be for some kind of gain. But that doesn’t mean you should post pointlessly all the time, either. I’ve unfollowed several people on Twitter when it was obvious they were tweeting constantly just to improve their reputation in a company, or to look like a “social media goddess”, or whatever fancy title they wanted to have. There are lots of people who just keep spamming retweets and their company updates all day, and all I can really think is “who is paying that much attention anymore? does anyone outside your office building really care?”

That moves onto another point: the question “does anyone outside X or Y group care about what you’re saying” doesn’t really work as a value judgement for social media either. Every outlet of social media is meant to serve a different purpose, and within those outlets, people will find new, personal ways to use it. If you’re on Facebook posting about your personal life, for example, then it makes no sense for someone to yell at you saying “who cares about your personal life? shut up already!”… that’s kind of what Facebook was made for in the first place. However, I do think there is such a thing as too much of a good thing here. Again, there are people I see posting to Twitter once every 5 minutes on average. I’m sorry, but even on my most exciting of days, I do not have THAT much to say to the general public. Maybe it’s just me, but I still kinda hold to that social etiquette of “you shouldn’t dominate the conversation”, even online. If you keep chatting and chatting just to enjoy hearing the sound of yourself typing, I consider that a form of rudeness, and it dilutes your messages in the future. People won’t pay as much attention to you, because they’ll see you as just chattering on like usual.

Finally, I see far too many people using social media to isolate themselves. One of the most fantastic things about these technologies is that we can communicate with so many people around the world and find a group of people who share our interests… but I see a LOT of people, on all sorts of networks, using social media to simply find their niche groups and never leave, and to avoid interacting with those around them. They get so comfortable in these groups that they forget to talk with people outside those bubbles, people with different viewpoints. It gets so bad that when these people do talk to others in the real world and meet a difference in opinion, they aren’t able to handle it like they once could; they’re so incredulous that someone doesn’t share their view, and their world is shattered. Now, finding your people and your own group to be a part of is great, don’t get me wrong, but don’t forget how you came to have these interests in the first place: people showed them to you, back when you had never heard of them before. One shouldn’t get so comfortable in their niches that they forget to step outside them once in a while.

And WordPress is now telling me I’ve hit 650 words, so I should wrap it up now or else I’m breaking that second bit of advice I just gave up there by rambling on too much. What do you all think? Do you have your own etiquette you try to abide by online? Do you have your own opinion on how social media is best used?

Apparently, there is a woman in Oregon who is testing what it’s like to talk to people ONLY using social media (Twitter, Facebook, Skype, etc) for a month (article is here). She lives in a storefront so everyone can see her, and has vowed to communicate solely by social media until December 1st.

The article then goes on to talk about how the writer gets frustrated because he can’t reach most people by phone call or office visit anymore, but can get instant Twitter and text message responses. In my opinion, he really seems to be whining a little bit. Being upset because people can shoot off an email faster than they can hunt you down, hope you’re in your office, and then walk back to their own seems kinda selfish to me. An email isn’t that great an interruption on someone’s time, and they can easily transition back into what they were doing. For me to go visit someone, it involves changing location, my mind wandering on the way there, having that conversation, prolly getting sidetracked by random chatter and wasting time, then getting back to what I was doing. By then, my brain is somewhere else entirely. For the sake of my own productivity, I’d prefer to just send an email. It’s not that I don’t care about you; it’s that I’m worried I’ll never get anything done if I go see everyone for everything.

It may be sad, but it’s true: Time really is becoming one of the most valuable gifts you can give to someone. For me, in-person conversation is a luxury item now, and it’s something to be enjoyed. If I see someone in person, I want to stay a while, I want to chat. Sadly, I don’t always have the time to do this as a graduate student taking 12 hours, doing the homework for those courses, learning a language, hosting a podcast, and having a job.

Back to what the woman is doing. The main reason she’s doing this is that she saw the effect communicating by social media was having: she was simultaneously cutting part of herself off from people while still communicating with them, and she wanted to explore that more. I agree with her completely that something is missing, even over Skype. My sarcasm is often lost over instant message, and Twitter doesn’t give you enough space to make many good jokes. People also get an entirely different view of me based on which social network they see me on: on Twitter, I have no idea what people think of me. Prolly random and scatterbrained. On Facebook, people see more of the real me; they definitely get more comedy and awesome shared links from The Onion, etc.

Social media can be frustrating to me, even while it gives me such great opportunities. When I talk to my significant other on Skype, I want to be able to hug him, to be there physically even more. Years ago, I’d have been lucky to see his face. I heard a story that Bill Gates and his wife, while they were dating long distance, would each go see the same movie locally, then get on a pay phone and talk to each other about it. That was a date. Now, for my dates, we get on Skype, start a movie at the same time, and watch it together while talking. It’s the closest we can get to a cute night in. Trust me, I am nowhere near whining about getting the opportunity to talk to him, but perhaps I really am a little spoiled to still want more. Even today, many people can’t do something like that. I’m thrilled just to see him, but when I do, I still wish I could be there in person. So much communication is lost.

So where do I stand on what this woman’s point is? I love online communication, I do. I’ve been using it since 6th grade, and in a lot of cases I prefer it. I’m pretty shy, so talking online gives me a bit of relief from that anxiety. I see its use in terms of productivity, and it did raise my typing speed a hell of a lot 😉 But some conversations just need that personal touch. Having time be that much more valuable in some cases can be a good thing: when I talk to my sig. other online, he knows exactly how much he means to me because I take the time out almost every night to talk to him for an hour or two face to face. It’s made me value my time with him that much more, and reminds me of how much he means to me that I look forward to just talking to him digitally. Even though I want to see him in person a lot more than I get to, I wouldn’t dare leave my social media behind or call it bad, because it’s better than not having it at all.


So last week in EMAC 6300, we began discussing some of the political and legal issues revolving around new media; specifically, where can we draw the lines between original work and stolen work? Is there such a thing as intellectual property anymore? What sort of protection should be given to data under our current legal and digital systems?

I for one do believe that intellectual property should be protected by law. Simply because we have gone digital does not mean you can take something created by someone else and call it your own. That is still theft. Moreover, creative work is how people make money; when you steal their work and call it your own, you are potentially diverting profits from them and stunting innovation. But I digress; that’s a rant for another time.

I began web design at age 12, so from a young age I was familiar with using materials created by others. The rules were similar to what we were told in school: in lots of cases, if you just gave credit to the original designers or asked them if you could use their material, you were safe. At the very least, you shouldn’t go taking stuff from big-name corporations and pasting it on your own site, especially not without credit. You could also avoid the issue entirely by creating the content yourself, or by using materials from a “stock image” or “open source” site. However, as time went on, lines began to blur, and more and more boundaries were being pushed in terms of what was acceptable. Copyright law was growing more and more inadequate to cover all areas of digital property. Thankfully, Lawrence Lessig came in to help with all that.

Lessig created something called “Creative Commons”, which was meant to help bridge the gap between copyright and public domain. A CC license works kind of like copyright, except that you define for yourself how you will let other people use your work. With traditional copyright, all rights are reserved to you. But what if you want to publish something and allow others to use it for themselves? Say a band puts out a song and wants to gather some free publicity by making it legal to use so long as you credit back to them? CC lets you do just that.

It’s very easy to get a CC license. You simply pick one online, specify how much protection you would like to put on your work, and then mark your work or publish it with the CC information (they give you images, links, and other details showing how to mark your work). I myself published something under CC a year ago: I created an original tabletop role-playing system for a class and put it under a Creative Commons license saying people were free to edit it, so long as they credited it back to me. This way, people could expand upon the system and make up some of their own rules, as long as they made sure to say that they didn’t come up with the idea themselves. I like having some recognition for my weeks’ worth of effort, after all.
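If you’re curious what “marking your work” actually looks like for a web page, here’s a rough sketch of the kind of HTML snippet the CC site hands you when you pick a license. I’m using an Attribution license and a made-up URL for the work itself as an example; the exact markup and button image the chooser generates may differ:

```html
<!-- Hypothetical example of CC license markup; the real chooser output may vary. -->
<a rel="license" href="https://creativecommons.org/licenses/by/3.0/">
  <img alt="Creative Commons Attribution License"
       src="https://licensebuttons.net/l/by/3.0/88x31.png" />
</a>
<p>
  <a href="https://example.com/my-rpg-system">My Tabletop RPG System</a>
  is licensed under a
  <a rel="license" href="https://creativecommons.org/licenses/by/3.0/">Creative
  Commons Attribution 3.0 Unported License</a>.
</p>
```

The rel="license" attribute is the bit that lets search tools recognize the page as CC-licensed, which, as I understand it, is part of why marking your work properly matters and not just a formality.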

If you already know about CC, all of this may seem pretty obvious to you. But the main reason I wrote about it is that it astounds me how many people have never heard of it, especially when they’re in positions where they really should be using it. I have a close friend who went to a very fancy-schmancy private liberal arts college for undergrad, and at one point I suggested he search through CC for some papers on the topic he was researching. The response? “What’s Creative Commons?” So I explained, and he then said, “You have to understand, Mary. At my college, we are in a FAR different world than that of you geeks up at UTD.” I tried to explain that really, this isn’t just a geek thing, it’s a creative, collaborative thing, and liberal arts programs should be getting in on it. I’m not quite sure he believed me. On another occasion, I was working with a student media organization on campus designing their website, and was asked where I got the images for the design. I explained I went to sxc.hu, a large and well-known stock photo site. They were VERY wary of it: “Why can’t you just take the pictures yourself?” “I don’t have the resources…” “Well, we don’t want to steal content here, you should know that! This is so unprofessional!” “I’m not stealing; stock photos are open source.” “What’s open source?” This was a student media group at UTD, the aforementioned “geek” school. Not even the geek school is all up to date on this stuff!

In the end, I don’t think CC alone will ever fully replace copyright. It doesn’t cover everything that needs to be covered legally, and sometimes people will just want a full-on copyright. However, I find it an amazing tool for people like me who aren’t trying to amass tons and tons of money from something, but just want to share with some recognition. Basically, it helps to eliminate outright theft, but we still need a more overarching system in place to handle payment issues and the like. I really hope that more and more smaller, independent publishers will get on board and start to utilize its power soon. Who knows? Maybe once enough smaller people get in on it, some of the big names will come around too.

Kyle Kondas recently retweeted an article about some of the trends that social media experts think will shape media over the coming years. Some of their points I couldn’t agree more with, but others I have a hard time agreeing with, or at least think were overinflated. Here are some of the trends they listed:

“Mobile will become the primary point of access and on-ramp”: Absolutely. I know I for one only bring my laptop to campus if I need it, because I don’t like carrying that much weight. As technology becomes ever more present, and it becomes cheaper and more efficient to have people work on the go rather than chained to a desk, mobility will become even more important.

“Social Search”: Not really my thing. I can see it being useful in the future if it could tell that I was searching for movies, etc., but search algorithms need to be improved first, not just have Twitter and Facebook bolted onto them. Also, as per my post last week, as Twitter and Facebook become used more and more for things other than friends, people won’t necessarily want a generic script telling them that some of their Facebook contacts like a movie or band.

“Depth of Usage becomes more important than volume for Digital Media Publishers”: Slightly. Let’s take eReaders as an example. I, and several others like me, chose the Kindle because it has the largest volume of books available compared to the others. Also, the eInk screen is nice, easy to read, and doesn’t hurt like hell over time. Now look at Apple’s iPad. When it was released, it was marketed heavily as an eReader, and they made a new basic app for it, iBooks. No one I know with an iPad uses iBooks. Why? Because it hurts like hell to read on, and the library is slim at this point. These people also own a Nook or a Kindle or some other such device. Brand loyalty is not enough to sell a product… your product still has to be useful first. Also, one way to create depth of usage is to have a large volume of products. If you don’t have a lot of products, your micropayment system doesn’t go very far, does it?

“News makers get professional: they have the tools, reason, and money to set the agenda on their own”: Ok, I’m actually kinda confused on this one. I think by that part he means the casual news bloggers, not major outlets. But then he says they can “communicate directly to the masses”, and I don’t know which group he’s talking about anymore. In general, all publishers of news are going to have to get more mobile if they want to survive. The pace of our society is picking up to the point where we can’t afford, or even want, to sit and watch an hour-long news program every night. People want what’s important to them, and they want it at their fingertips when they want to read it, not when a network has decided it’s a good time for people (which is apparently 6, 8, and 10).

“Mobile Web and Mobile Apps”: Just see above under “mobile”. Also, news organizations should look into both of these things.

“It’s the experience… stupid. Fantastic design with emotional connection will be the differentiator”: ABSO-FREAKIN-LUTELY. As a designer myself, I know the power a good design can have. Think about it: how many Twitter apps are there for the iPhone? What makes people choose one over another? Hint: it’s not whether Twitter themselves created the app. It’s the one people feel has the better UI. This goes for lots of apps that share a similar purpose: people will choose the one with a better design and easier use over all the others. If people don’t like the experience of using your product, it will take a LOT to get them to use it. And as more and more developers become able to build the products most consider essential… big-name companies better step up their design game if they wanna remain big-name companies.

So my blog post this week is admittedly a little late, but I’m kinda glad for that, because today an article was posted on Mashable about how Facebook is teaming up with search engine Bing. With this partnership, Bing search results will now include things that your friends have “liked” on Facebook, where applicable. For example: if you’re searching for local movies, and a lot of your friends liked “The Social Network”, then your search results will have a little box saying “___ of your friends liked this movie!”

As we just covered in class, around 70% of consumers trust peer reviews of products over any other form of review. My only problem is that I don’t see a huge point to this. If I care enough about Facebook integrating with Bing (which, admittedly, I don’t use), it prolly means I check Facebook often enough to know when my friends like something or post about it. When I search for “local movies” or “Iron Man”, I’m prolly searching for more specific data, like movie times, or what year the movie was produced to settle a debate, etc. If I want to know what my friends recommend, I’ll go to Facebook. I don’t really want my search results cluttered with my friends’ opinions.

Moreover, I have several people on my Facebook who are clients, professional acquaintances, and the like. I don’t necessarily agree with their opinions, and honestly, in some cases I don’t really care to hear them. I don’t want those popping up in my search results, because I just don’t care. I know Bing has this huge thing about “personalizing the search” and “making the search more human”, but I guess I just use search differently. What about other people? Does this seem useful to you?

This week, I was browsing mashable.com, and amidst all the lame articles about how Facebook and FourSquare had huge amounts of downtime (yes, people were indeed forced to communicate IRL for upwards of 14 hours – the horror), they had an article about 5 projects on YouTube that are trying to do some kind of social good. I kinda have a soft spot for social service projects – I especially like working with FoodShare organizations; having to go without even the basics of food is just a horrifying thought to me. I don’t even know what it’s like to be that hungry; I just know that even the mild hunger I do have to put up with makes me not wanna do anything and feel miserable all over. I can’t imagine that being my entire life. Anyway, enough rambling. Seeing these projects got me thinking: how can we use social media to affect the lives of people around us? I mean, I want to focus on using new media to teach language, but it’s incredible to see how people are using it to make an immediate, tangible change in their own neighborhood.

My favorite project would have to be the last one featured on the list, the Uncultured Project. It’s based on the idea that if a blog about Britney Spears can become one of the most popular things on the internet, why couldn’t a blog about something as dire and important as global poverty become just as prevalent to the public? The person who started the project decided one day that he wanted to do something he knew would make a difference in the world, so he packed up everything, moved to Bangladesh, and just began helping the people around him: using the money he had been saving for an Xbox to help fund schools, etc. He then began posting a series of YouTube videos showing how things truly were there; he would ask the people what they needed, and they would tell him “blankets, pencils, books”, and he would post that to YouTube. Thousands have responded with donations of money and items to help out the people of Bangladesh.

Seeing a project like this makes me wonder what we could do just here at UT Dallas, or, for those not affiliated with the University, how we could use social media to impact the things around us. I honestly don’t believe my calling is to pack up and go to Bangladesh, but surely with this technology we can think of a better use for it, at least some of the time, than tweeting about the ham sandwich we had at Starbucks. We always talk about the incredible boons in communication, the networking, the publicity, the voice it gives to the people… so why don’t more people use it for social good? I know there are many people who do, but if we do so much studying in EMAC on the applications and reasoning behind these new media, why don’t we pay more attention to what we can do to impact the world around us? And no, I don’t have any new, groundbreaking ideas to report yet, outside of my idea to teach language. But I’m definitely going to think on it and see where I can get.

So in getting updates from a blog that updates on my mir:ror, Digital Media Thoughts, one of the stories was about how IE9 has shot itself in the foot before it even set foot out the door (article here). Yes, it will be HTML5 and CSS3 compliant, or so we’ve been told. The beta version has been out for 2 weeks now, and its functionality is quite nice; kinda what you’d expect from a web browser, so the fact that Microsoft is just now getting it out the door is sad. But here’s the problem:

“IE9 will not run on Windows XP, has problems on Vista and now we learn that it will only run on Windows 7 with SP1 installed.”

That’s the problem. You MUST use Win7 or else you can’t use IE9. I still have several friends who are just as much tech nerds as me and still use WinXP. I have some friends (god bless them) still on Vista. I have Win7, but really, that’s only because I can get it at UTD for 32 dollars. At around $150, the normal cost of Win7, I prolly wouldn’t have it. I’m just not that quick to jump on a bandwagon that costs me about half a paycheck, and I’m certainly not about to do it for an internet browser when I can download Firefox and Chrome for free.

Question for this week: Do any of you still use IE? I know many do, as I work in the computer labs on campus and see people use it all the time, despite our having Firefox available. Why do you still use it? Is it the effort it would take to download and learn a new browser? Doesn’t it ever annoy you when IE won’t display some things right? In my web design work, I’ve had managers tell me I have to completely redesign a site only because IE won’t render it right. And IE8 runs it fine, but “Compatibility Mode” then messes it up. So I develop two websites, in effect: one for the special IE people, and one for the rest of the market share (the usual trick for this is sketched at the end of this post). It’s gone so far that a while back, the German government officially announced they were not supporting IE and encouraged the whole country to switch to a different browser. Yes. An ENTIRE NATION has declared that they hate IE and won’t use it for official work. What would it take to make you switch from IE? Would you buy Win7 solely to avoid changing browsers?
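For the curious, the usual trick I mentioned above is IE’s conditional comments: you serve one site to everyone, plus a small IE-only stylesheet override, rather than literally maintaining two separate sites. A rough sketch, with made-up file names:

```html
<!-- Everyone gets the normal stylesheet. -->
<link rel="stylesheet" href="style.css" />

<!-- Only IE 8 and below parse the conditional block and load the extra sheet;
     other browsers treat the whole thing as an ordinary HTML comment. -->
<!--[if lte IE 8]>
<link rel="stylesheet" href="ie-fixes.css" />
<![endif]-->
```

Everything between the [if lte IE 8] and [endif] markers is invisible to Firefox, Chrome, and friends, so only the IE crowd ever downloads the fixes.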

My presentation over this week’s readings (“The Medium is the Massage” and “The Medium is the Message” by Marshall McLuhan) can be found here. Don’t worry, it can be played within the browser window and should be visible to everyone. If there are any problems, comment here and let me know.

For the first bit of the presentation, I showed my mir:ror, which is made by a French company called Violet. It’s an RFID reader. They make some other cool things, including the Nabaztag, a Wi-Fi-connected bunny rabbit that I have owned for 4 1/2 years now. I love it, but my friends find it weird. As for why they’re bunny rabbits? I don’t know, but I love rabbits, so it doesn’t bother me 😛

If there are any questions, or any further discussion people want to have about the readings, feel free to comment!

I was doing some research for Approaches to EMAC today (note: my topic is researching how a specific type of new media can influence a specific type of language performance…those specifics have yet to be determined, hence the research ^_^), and found a line from the Soentgens article I was reading:

“[students] do not automatically become autonomous simply by being placed in a self-study environment”.
-Soentgens, 1999

This line stood out to me for a number of reasons, but the main one is that it’s long been a theory that if you simply throw someone into an immersion environment, like sending groups of kids to Mexico for a month, they will be forced to pick up on things, adapt to the culture, and learn the language. However, there is evidence that this is not true. I’ve heard personal anecdotes from people who went to Japan to teach English for a year and came back knowing no more Japanese than they did before. They created their own bubble around themselves and refused to learn any more Japanese than was necessary to survive (some even simply grunted at cashiers and the like). In Soentgens’s study, some students simply didn’t do the assignment, despite being given a partner to help, the tools to do it, and instruction on how to do it properly. It reminded me a lot of the old saying, “You can lead a horse to water, but you can’t make him drink.”

This got me thinking about new media in our society. In some EMAC classes I sat in on before applying, I heard several of the students saying that older adults will simply pick up on new media because they’ll figure out it’s spreading and becoming more popular. But is that really true? Yeah, moms aged 40-60 are rapidly becoming the most common Facebook group, but they’re not using it the way most people think Facebook is used: they’re only playing the mini-games on there. I don’t often see older adults using Twitter, and iPhones are still most popular amongst the younger demographic. While several of you are probably thinking “Yeah, sure, but that’s all newer tech!”, think again. How many older people do you know who use IM? Message boards? Those have been around since the late 70s with BBSes (anyone else visit the alt. BBS groups? anyone? just me? :P). Working in tech support, I see most students still struggling with e-mail; that’s definitely nothing new.

The thought for this time: Do you think people will pick up on new media simply by being in the environment of it? Note: this is different from knowing the lingo of new media: most people kinda know what Twitter is, but far fewer actually use it. How long do you think it will take for our current forms of new media to truly become mainstream, and what do you think will bring that about?

Hopefully this is the last in a series of rants, but it’s something that had been bothering me since my classes started. When telling my friends stories from the EMAC classes, you kinda focus, as always, on the weird or negative ones, since stories of things going smoothly are never the interesting ones. My first day in Approaches to EMAC, several people were shouting about how they hated Facebook: “too much noise!” “too many people!” “now your MOM’S on Facebook!!”… When I asked if any of this was actually a problem, they never answered me.

They later began to complain about privacy settings on Facebook; namely, the fact that they couldn’t control who saw what information, the fact that employers might be looking at it, etc. I told them then that they could just change their privacy settings. There are even tools online available to help you make sure outside servers can’t see much of your Facebook info. Their response? “No one can figure out those things! There’s just too much! It’s too hard!” Now, the response in my head was “Well, evidently your mom can figure it out, so what does that say about you…”, but I refrained.

A few minutes later, someone complained of not being able to get on eLearning because they had forgotten their netID. I told them I could help them look it up, because I work for the university’s IR department, and part of my job is helping users with their account information. I got a chorus of “WOW, THAT’S CREEPY!!”, and I muttered to myself that perhaps if people kept track of their privacy settings as I mentioned before, and bothered to remember their login for something like the University’s systems, then people like me wouldn’t need to be given access to their information.

I guess my question/musing this time around is: at what point do privacy settings really become too labyrinthine to make sense of and reasonably manage? Facebook’s settings do indeed cover several pages, but they’re worded such that even your mom can manage them herself and figure out what’s being said. Sure, it may be weird to think that a student can look up your University account information, but trust me, I’ve signed several things saying I can be federally prosecuted for misusing that information. Do you trust the people on the internet who have access to your information with that responsibility? For those who don’t: why do you then not keep track of your information well enough to make people like me obsolete? Are we really being creepy? Or are most people simply being irresponsible? We would think a person forgetful and irresponsible for forgetting their SSN after telling it to a corporation… so why is it not seen as equally irresponsible to constantly forget your passwords?