Twitter in the Enterprise – Round 56745327

At the last minute I had to cancel my trip to the SAP Influencer Summit, but I am following it almost as if I were there – by following the Tweet Stream.  SAP has also provided a Virtual Environment where analysts, media, and bloggers can participate interactively – right now I am watching a live video on their On Demand Strategy (hm.. how appropriate – watching the On-Demand session on-demand).  The Virtual Environment includes Twitter tools, including sentiment analysis based on SAP’s Business Objects technology:

(Cross-posted @ CloudAve )


Scoble is Wrong When He Says He is Wrong:-) Full Feeds Still Rock

Wow, I’m sensing another TechMeme Storm rising (and a certain analyst would call it a circle j***, but that’s another matter). Robert Scoble says he was wrong when he said in 2006 that he wouldn’t use any news aggregator or feeds that aren’t full text.

I think the Scobleizer is wrong now that he says he was wrong. 🙂

His key argument is that his reading habits changed, he relies a lot more on Twitter, which is short form, uses the iPhone which is not that convenient for lengthy text, and Google Reader has become bloated and slow.

All true. But let the user / reader choose: even the sadly slow and bloated Google Reader offers the choice of reading full text or scanning just the headlines.  It’s a simple switch; there is no need to cut off the source.  I don’t read all my feeds A to Z like Robert does – I do a lot of quick scanning.  But I find it extremely frustrating to have to click through to a site, sometimes wait looong (we’re all guilty of having too many widgets and plugins that slow down page load), only to find out it wasn’t worth the wait.  So I tend to skip partial feeds – and guess what happens to less-read items?  They get dropped from Google Reader…

From the content author’s point of view, I understand the need to bring traffic to ad-supported sites, and that’s about the only case where providing a partial feed makes sense (but even then, please remember to send enough to entice me to click through).  But for many others, perhaps for the majority of blogs I follow: it’s a distributed world.  If you want your views to matter, you need to be heard / read via whatever distribution channel you can reach, and that means providing a full feed.

Three weeks ago I switched the Enterprise Irregulars, another group aggregation blog that I am editing, to WordPress, and with that was finally able to offer a full feed.  Our feed subscriber base doubled and on-site page views tripled. Yes, pageviews tripled despite the fact that we are “giving away” content.  Translation: we’ve become more visible, accessible, and it works.

(Cross-posted @ CloudAve )


Tweet Blender Wins Over Twitter’s Own List Widget – For Now

CloudAve readers can now follow the contributing bloggers’ twitter stream in a sidebar, thanks to a cool widget called Tweet Blender.  Finding it was not easy: I combed through at least 100 plugins / widgets, all doing essentially the same thing: follow a person, or do a keyword search.  Either/or… not both.  And definitely not a selection of users.

Tweet Blender came to the rescue (before Twitter Lists): it allows you to follow any combination of users and keyword searches. Smart!  But just days after I installed it, along came Twitter Lists… so the writing was on the wall for Blender.

Not until Lists got supported in widgets, though – which is what we’re seeing today: Twitter introduced its own List Widget. I quickly replaced Tweet Blender with the new widget – if only for testing – at Enterprise Irregulars, another group blog I am editing, thinking it might help with a major problem I have with Twitter API limits.

Here’s the gist of the problem: every time the widget refreshes, it eats into my API allocation – and it bites big: one API access per user followed. Over at Enterprise Irregulars we have thirty or so authors on Twitter, so 5 refreshes and I am out of luck (and API).  But the author of Tweet Blender came up with a smart caching solution, turning all blog readers into API contributors:

As of this writing, Twitter allows only 150 connections per hour from a single IP address.
Since TweetBlender works in the user’s browser, this means 150 connections from the user viewing the page on your site.
For each screen name in the list of sources, one connection is made. Hashtags and keywords are all bunched into one search query, so only 1 connection is made for them.
This means: if you have 30 screen names – every update makes 30 connections; if you have 30 hashtags – every refresh makes 1 connection. If you have 30 screen names AND 30 hashtags – every request makes 31 connections.
If you set TweetBlender to refresh every 10 seconds and you have 50 screen names in sources then after the 3rd refresh the user viewing the page would reach the connection limit – i.e. in 30 seconds they will be done and would have to wait for 59 minutes and 30 more seconds before fresh tweets become available.
The more screen names you have – the quicker the limit is reached.
To deal with it, caching is added. When user A gets fresh tweets in his browser they are sent to your server and stored there. When user B gets fresh tweets in his browser (against his own 150 limit) they are also updated on the server. All users that view your page keep the cache fresh.
Once user A reaches his limit TweetBlender switches to cached mode and instead of going directly to Twitter, starts getting tweets from your server. If user B is not yet at the limit then his updates will help user A see fresh content.
The more users view your page and the more evenly the traffic is spread out – the lower the chances of reaching the limit. All visitors to your site will keep the cache up to date and help each other.
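The accounting and cache-fallback scheme quoted above is easy to model. Below is a hypothetical Python sketch (the real plugin is PHP/JavaScript, and every name here – `BlenderClient`, `fetch_from_twitter`, etc. – is mine, not Tweet Blender’s actual API):

```python
RATE_LIMIT = 150  # Twitter's per-IP hourly connection limit at the time


class BlenderClient:
    """Simulates one visitor's browser running the widget."""

    def __init__(self, screen_names, hashtags, shared_cache):
        self.screen_names = screen_names
        self.hashtags = hashtags
        self.cache = shared_cache      # server-side cache shared by all visitors
        self.connections_used = 0

    def connections_per_refresh(self):
        # One connection per screen name; all hashtags/keywords share one search query.
        return len(self.screen_names) + (1 if self.hashtags else 0)

    def refresh(self):
        cost = self.connections_per_refresh()
        if self.connections_used + cost <= RATE_LIMIT:
            # Direct mode: hit Twitter, then push fresh tweets into the shared cache.
            self.connections_used += cost
            tweets = self.fetch_from_twitter()
            self.cache["tweets"] = tweets
            return tweets
        # Cached mode: this visitor is rate-limited, so serve what other
        # (not-yet-limited) visitors have deposited on the server.
        return self.cache.get("tweets", [])

    def fetch_from_twitter(self):
        return ["(fresh tweets)"]  # stub standing in for the real API calls


# 30 screen names -> 30 connections per refresh -> 5 refreshes per hour
cache = {}
visitor = BlenderClient(["user%d" % i for i in range(30)], [], cache)
print(RATE_LIMIT // visitor.connections_per_refresh())  # 5
```

After five refreshes this simulated visitor exhausts the 150-connection budget, and every further `refresh()` silently falls back to the shared cache instead of failing – which is exactly the graceful degradation the Twitter List Widget lacks.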

An absolutely smart solution – but what if I don’t have the API problem at all?  This is what I expected to test with Twitter’s own solution.  But what a disappointment…  If you look at Enterprise Irregulars, you probably see the tweet stream – I don’t.  All I see is a blank frame.  Same on Scoble’s blog.  Or Mashable. Or Brian Solis.

I’m out of Twitter API allocation (or so I assume – could not confirm yet).  But while Tweet Blender uses a cache, in fact a collaborative smart cache, Twitter’s own Widget just throws up.  Yuck.  Tweet Blender is the absolute winner.  For now.

I’m writing this post as a tribute to Kirill, Tweet Blender’s developer, and in recognition of his outstanding responsiveness. Read the Facebook threads – he investigates individual installations and comes up with bug fixes overnight: exemplary Customer Service from a one-person team.

But he has just become an endangered species.  With gazillion $ in funding, Twitter has the resources and will no doubt come up with a solution to the API / caching problem.  But let’s not write the little guy off just yet: his product still has more / better features… and I have no reason to believe he will rest on his laurels. 🙂

Update: my assumptions just got confirmed:

(Cross-posted @ CloudAve )


USGS Now Embraces Twitter as Source of Earthquake Information

This time it was personal.  The earthquake hit three miles from my house.  It was a minor one, magnitude 3.7, but I felt it very strongly, albeit very briefly.  Just a sudden kick in the butt, nothing more.  Perhaps that’s the difference between being right above the epicenter and feeling it remotely.

I jumped on Twitter, and I was among the first few to report the quake.  Within seconds there were dozens, then hundreds of reports.

Not that it was a surprise; we’ve seen Twitter become the primary initial news source for earthquakes, fires, military coups, etc.  (For a while Google thought I was some earthquake expert simply because I pointed out Twitter was the first to report quakes in Japan and China.)  But clearly, not all information on Twitter is reliable, as was the case with the fake LA earthquake video.

We need both speed and reliability.  The first comes from the crowd – nothing can beat having millions of “reporters” in the field, wherever and whenever significant events happen.  But we typically do expect some form of verification, be it from a traditional news agency or, in the case of earthquakes, often the USGS, the US Geological Survey.  Until recently the information flow was one-way.  But after yesterday’s quake I found an interesting link to the Google Maps mashup above. It’s created by @usgsted, the USGS Twitter Earthquake Detector. Here’s the explanation:

In this exploratory effort, the USGS is developing a system that gathers real-time, earthquake-related messages from the social networking site Twitter and applies place, time, and quantity data to provide geo-located earthquake detection within 60 seconds of an event’s origin time. This approach also provides a central directory of short first-impression narratives and, potentially, photos from people at the hazard’s location.

Social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources.  People local to an event are able to publish information via these technologies within seconds of their occurrence. In contrast, depending on the location of the earthquake, scientific alerts can take between 2 to 20 minutes. By adopting and embracing these new technologies, the USGS potentially can augment its earthquake response products and the delivery of hazard information.

To be fair, the USGS has not been entirely deaf before now: once you locate the relevant quake info (which is quite an achievement in itself), there is a Did You Feel It? link where, if you are really persistent, you can provide feedback.  The form asks for a lot of data and takes a while to finish – enough to deter most.  Which is why the fact that the USGS is now embracing Twitter is a major milestone: it combines the speed of crowdsourced reporting with the verification / authority of experts.

(Cross-posted @ CloudAve )



If You Start Your Tweet with @name, Few Will See It

This is so obvious, yet little known – and although Mark Suster warned us all right here at CloudAve, I keep on falling in the trap.  Just today as I wanted to announce yet another great post by Mark, I tweeted this:

@msuster discusses how the Ice Age is thawing for Venture Capital

Big mistake.  Had I written “great discussion by @msuster”, a lot more people would have seen it. Why?   I’ll just quote the key chapter from Mark’s original tutorial:

This is important … If you send somebody a message and you START it with an @name, then the only people who will see your message are people who follow you and people who follow the person you replied to.  Most people don’t seem to know this.  For example, if you follow me but not @deblanda and I send her a message starting with an @, then you won’t see it at all.  Anyone who follows both of us will see the message.  If you precede the message by anything, even a dash and a space like, “- @deblanda nice to see you”, then everybody will see it.

When does this come into play?  Sometimes I’ll see people who want to make people aware of a blog posting.  They’ll say “@msuster provides great insight into VC valuation discussions – see” .  They might have 2,000 followers.  I have 1,200.  Only the small subset who follow both of us, say 100, will see the message.

So if you’re really responding to somebody and you don’t want all your followers to see it (but you don’t necessarily want to send a private message via DM or you can’t because they don’t follow you) then start with an @.  Otherwise make sure it has text in front of it.
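The rule above is mechanical enough to encode. Here is a tiny, hypothetical Python helper (the function name is mine, not anything Twitter provides) that guards a broadcast tweet against accidental reply-scoping:

```python
def broadcast_safe(tweet: str) -> str:
    """If a tweet starts with @name, Twitter treats it as a reply and shows it
    only to people who follow both accounts.  Prepending any text restores
    normal visibility; a leading '.' became the common convention for this."""
    if tweet.startswith("@"):
        return "." + tweet
    return tweet


print(broadcast_safe("@msuster discusses how the Ice Age is thawing"))
# .@msuster discusses how the Ice Age is thawing
```

A tweet that merely mentions a user mid-sentence (“great discussion by @msuster”) passes through untouched, since only a leading @name triggers the reply behavior.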

(Cross-posted @ CloudAve )


How to Prune Twitter Spammers if Your Account is Compromised

I’ve just received one of those “Hey, I just added you to my Mafia family. You should accept my…” crap-spam-junk invitations on Twitter. Normally these come from accounts I don’t recognize, and I either ignore or block them.  But this time it came from a gray-haired, well-respected industry analyst – I just could not imagine him getting involved.  When I contacted him, he told me he himself had received 75 Mafia invitations – but the fact that I received one in his name suggests his account got compromised.

He had already changed his Twitter password, yet the hijackers kept on using his account.  That reminded me to share this: changing your password is no longer sufficient to regain control.  I don’t pretend to be a security expert (we have a few of those over @ CloudAve), but since more and more Twitter apps are “doing the right thing” and using OAuth authentication, those connections stay valid even after a password change.  So here’s what you should do: go to the Connections page in your Twitter settings and check out all the applications listed there.

You may be surprised… the stupid lil’ thing you had checked out and decided you did not like after 5 minutes still sits there, fully authorized.  So do yourself a favor, prune the list.  Whatever you don’t recognize, or no longer want, click “revoke access” – it’s that simple.

(Note: The image above does not depict “bad guys”; it’s a screenshot of my account, and I don’t have any – or so I hope.)

(Cross-posted @ CloudAve )


Twitter Valued at $1B? Peanuts! 37Signals Worth $100B!

37signals is now a $100 billion dollar company, according to a group of investors who have agreed to purchase 0.000000001% of the company in exchange for $1.

Founder Jason Fried informed his employees about the new deal at a recent company-wide meeting. The financing round was led by Yardstick Capital and Institutionalized Venture Partners.

In order to increase the value of the company, 37signals has decided to stop generating revenues. “When it comes to valuation, making money is a real obstacle. Our profitability has been a real drag on our valuation,” said Mr. Fried. “Once you have profits, it’s impossible to just make stuff up. That’s why we’re switching to a ‘freeconomics’ model. We’ll give away everything for free and let the market speculate about how much money we could make if we wanted to make money. That way, the sky’s the limit!”

Hilarious… but I’m not quoting the whole thing – this already stretches the limits of Fair Use – so go ahead and read the original.

Update:  I think $37B would have been a more appropriate valuation, but I understand Jason does not want to leave small change on the table.

(Cross-posted @ CloudAve )


Brits, the Masters of the Universe… the Facebook Universe

University of Salford

Image via Wikipedia

The University of Salford in Manchester will offer a Masters degree in Social Media, focusing on Facebook and Twitter.

Salford claims to be the world’s first to offer a Masters course in social media, but they are not.  That title goes to Birmingham City University which announced their one-year course in Social Media in March. For a cool £4,400 ($7,200) you get a Master’s Degree of … well, let’s just say questionable value. 

Continue reading


Business Planning on Twitter

As with all-things-Twitter, you should read this bottom-up:


And the text summary – again, read from bottom up:

  • amandagbeals @bencasnocha love the biz idea but dont leave out the gays!!! they wld be ur biggest clients!
  • zolierdos @bencasnocha On second thought, this business model is one of the oldest, although not limited to kissing 🙂
  • djnotfound @bencasnocha but… but can they get pregnant by kissing?
  • zolierdos @bencasnocha Haha, will it be bootstrapped or VC funded? 🙂
  • constantmotion @bencasnocha I have to ask, did a specific experience lead to this idea?
  • jeffnolan @bencasnocha you could rely on craigslist as your go-to-market strategy
  • msimonkey @bencasnocha Who decides whos the expert?
  • bencasnocha Business idea: create a kissing school where people pay to practice kissing "expert" instructor of opposite sex and get immediate feedback.