Those of you who follow this blog know that I believe Facebook’s servers are reaching their limits. In June 2014, when there was a 69-hour outage for me (and at least 30 minutes for most other Facebook users), I noted a marked increase in Facebook bugs before the crash. Yesterday’s outage (some reports say it lasted 35 minutes, but some media have reported up to 90) was even longer for most users, and it, too, was prefaced by curious bugs identical to the earlier ones.
I thought it was very odd that in all the articles I have read today about the issue, no media have been able to get a comment from Facebook. It made me wonder if people had clammed up because of what it could mean for the share price.
And I do realize how preposterous my theory sounds, as the logical thing to ask is: how could a company the size of Facebook not be equipped to handle its growth?
Well, how could a company the size of Facebook not be equipped to deal with time zones outside the US Pacific? And we know a company the size of Google is not equipped to deal with the false malware warnings it sends out.
However, the geeks have reported in. There are two notes at the Facebook developers’ status page that relate to the outage.
If you can understand the technobabble, they are: ‘Traffic and error-rates are almost back to normal after a coordinated intervention by our engineering teams. We are now monitoring the situation and we have our best engineers determining the root-cause of this issue that affected much of our web fleet. We apologize for any inconvenience and we aim to ensure that this issue does not repeat,’ and ‘Platform has been stable for >5 hours and our engineers have reproduced the complex issue that was causing many of our www/api servers to run out of resources. The team is now working on the final fix, but we are confident that there will be no further regression. Thank you for your patience and we apologize for any issues that we caused for your apps. Have a great weekend.’
If I understand them correctly, the second actually says that the servers ran out of resources.
Hopefully, the above means Facebook has fixed the error, which I believe to be the same as the one in June. Facebook itself had then discounted that it was an attack.
No wonder no one has offered the media a comment, if the site is falling over so regularly because of its bugs.
Long before Google started pissing me off with its various funny acts (such as spying on users without their consent), it released a program called Google Earth. I installed it in July 2009 on my laptop, and decided to feed in 1600 Pennsylvania Avenue NW, Washington, DC 20009, just to see how it had rendered the White House. Other than various Wellington locales, that was my first search query. This was the result, confirmed by others at the time:
There’s no White House there, unless when the Google Earth people made the program, aliens had beamed up the entire block temporarily.
Google has since fixed this. However, back in 2009, it didn’t know where the White House was. And here I was, thinking that it was an American program, where those working on it would double-check where its most famous building stood. This was four years after Google Earth was released.
So any time people say that a big company full of techs must know more than an individual, think of this example, and some others I’ve posted over the years.
The same lesson, I might add, applies to big countries versus small countries. Big definitely doesn’t mean right. The key for the small countries often is to outmanœuvre the large ones, by being more inventive and more innovative.
God, I love New Zealand.
But why? Here’s what Google says:
which means: we can’t find anything wrong with this site since April 8, even though our last scan was on the 23rd. Really? There has been nothing wrong for 15 days, but you’ll still block our site? (Note: Google did not block this site on the 23rd.)
Let’s go to Google Webmaster Tools to see what it says there:
That’s right: nothing. There’s nothing wrong with the site.
Maybe we’ve been flagged somewhere else, then? How about Stop Badware?
Nope, we’re all fine there, too.
In fact, even Google is wrong when it says there were problems on April 8 – another sign of its malware bot reading from a cache instead of fresh pages, because we say we fixed everything on April 6. Well, here’s what Google itself says about Autocade when you go into Webmaster Tools in more depth:
which correlates with the claims we have made all along: our ad server got hacked on April 6 (NZST), and we sorted it within hours that day.
We’re interested to see if the false malware warnings can carry on for a month – after all, Google will block a blog for six months even though it says it will lift a block in 48 hours after an investigation. Things take a bit longer there than they claim. There’s a case of one gentleman who has had his site blocked by Google for two months for no reason. I’m sure many, many others are being wrongly identified by Google – and there are far too many companies relying on the Californian company’s hypocrisy in identifying malware.
Google’s belief that webmasters are wrong whenever they claim a false positive is looking more dubious by the day.
PS.: The last post at this forum entry is interesting: Google blocks a website based on stale data. The website where the malware allegedly was did not even exist, but it still triggered a warning at Google. The webmaster writes, ‘The site concerned doesn’t exist and more to the point, there is no DNS record for it either – so it cannot exist. / The IP which was once assigned to it is now assigned to someone else.’ That was in March. Judging by the articles online, Google’s been having problems with this particular bot since the beginning of 2013. The sooner they retire the program, the better, I say. –JY
Not a political post, sorry. This one follows up from the Google boycott earlier this month and is further proof of how the house of G gets it very, very wrong when it comes to malware warnings.
As those who followed this case know, our ad server was hacked on April 6 but both my web development expert, Nigel Dunn, and I fixed everything within hours. However, Google continued to block any website linking to that server, including this blog – which, as it turned out, delayed my mayoral campaign announcement sufficiently for things to go out on the same day as the marriage equality bill’s final reading and Baroness Thatcher’s funeral – and any of our websites carrying advertising. Lucire was blacklisted by Google for six days despite being clean, and some of our smaller websites were even blocked for weeks for people using Chrome and Firefox.
We insisted nothing was wrong, and services such as Stop Badware gave our sites the all-clear. Even a senior Google forum volunteer, who has experience in the malware side of things, couldn’t understand why the block had continued. There’s just no way of reaching Google people though, unless you have some inside knowledge.
We haven’t done any more work on the ad server. We couldn’t. We know it’s clean. But we eventually relented and removed links to it, on the advice of malware expert Dr Anirban Banerjee, because he believed that Google does get it wrong. His advice: remove it, then put it back after a few days.
The problem is, Google gets it wrong at the expense of small businesses who can’t give it sufficient bad publicity to shatter its illusory ‘Don’t be evil’ claim. It’s like the Blogger blog deletions all over again: unless you’re big enough to fight, Google won’t care.
Last night, we decided to put back the old code – the one that Google claimed was dodgy – on to the Lucire Men website. It’s not a major website, just one that we set up more or less as an experiment. Since this code is apparently so malicious, according to Google, then it would be logical to expect that by this morning, there would be warnings all over it. Your browser would exclaim, ‘You can’t go to that site – you will be infected!’
Guess what? Nothing of the sort has happened.
It’s clean, just as we’ve been saying since April 6.
And to all those ‘experts’ who claim Google never gets it wrong – that the false positives we netizens report are all down to our own ignorance of computing – well, there’s proof that Google is fallible. Very fallible. And very harmful when it comes to small businesses, which can lose a lot of revenue from false accusations. Even we had advertising contracts cancelled during that period, because people prefer believing Google. One ad network pulled every single ad it had with Lucire’s online edition.
People are exposed to its logo every day when they do a web search. And those web searches, they feel, are accurate and useful to them, reinforcing the warm fuzzies.
Can we really expect a company that produces spyware (and ignores red-flagging its own, naturally) to be honest about reporting the existence of malware on other people’s websites? Especially when the code the hackers used on April 6 has Google’s name and links all over it?
It can be dangerous, as this experience has illustrated, to put so much faith in the house of G. We’ll be steadily reintroducing our ad server code on to our websites. While we’re confident we’re clean, we have to wear kid gloves dealing with Google’s unpredictable manner.
No matter how bad you think you’ve got it, some poor bugger has it worse. One webmaster, Steven Don, has had Google claim that he has anywhere between nine and fourteen trojans on his website, but he has none. The Google Safe Browsing page claims nine trojans presently, but can’t say which domains he has supposedly infected.
If you read through the page, you’ll see that, like our own Nigel Dunn, he’s no amateur at this stuff.
He has rebuilt the sites from scratch, and compared the files he has with the ones on the server, and there are no differences. Yet Google refuses to acknowledge that his site is clean after two months.
The only things he cannot vouch for himself are the Google Analytics and Google Adsense codes, and the Google Plus One button. And that makes me wonder about Google Adsense once again.
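That kind of verification – rebuild the site from known-clean sources, then compare it file by file against what the server is actually holding – can be sketched in a couple of shell commands. The directory names and page content below are hypothetical stand-ins, not Steven’s actual set-up:

```shell
# Hypothetical layout: a pristine local rebuild vs. a copy pulled down from the server.
mkdir -p local_copy server_copy
printf '<p>clean page</p>\n' > local_copy/index.html
printf '<p>clean page</p>\n' > server_copy/index.html

# Recursively compare the two trees; any injected script would show up as a diff.
diff -r local_copy server_copy && echo 'trees match: nothing has been injected'
```

If `diff -r` prints nothing and exits cleanly, the server files are byte-for-byte identical to the clean rebuild – which is exactly the evidence Steven had, and which Google ignored.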
Day six of the Google boycott: if The New York Times isn’t safe from blacklisting, then how can we be?
11.04.2013
It’s day six on the Google blacklist for Lucire. And no, we still don’t know what they are talking about. StopBadware doesn’t know what they are talking about. Our web guys and all our team in different parts of the world don’t know what they are talking about.
Today, I decided to venture to the Google forums. Google forums are generally not a good place to go to, based on my experience with Blogger, but I came across a really helpful guy called Joe (a.k.a. Redleg x3), a level 12 participant, who has gone some way to redeeming them.
I told Joe the same story. He began by writing, ‘First I think you really need an explanation from Google, I can see why your site was flagged originally but do not understand why Google did not clear it today.’
Exactly. But what was fascinating was that when he checked through a private version of aw-snap.info, which helps you see what malware spiders see, he found the old Google Adsense code the hackers injected.
This very code has been absent from our servers since Saturday, otherwise we would never have received the all-clear from StopBadware.org. We also don’t use a caching service any more (we used to use Cloudflare). But, if Google saw what Joe did, then it means Google’s own bot can’t load fresh files. It loads cached ones, which means it keeps red-flagging stuff that isn’t there.
If you read between the lines of what Joe wrote, then it’s clear that Google relies on out-of-date data for its malware bot. He checked the infected site and the file that caused all the problems has gone. And we know the hacks are gone from our system. It’s totally in line with what we were told by Anirban Banerjee of Stopthehacker.com on the errors that Google makes, too. I can only conclude that it’s acceptable for Google to publish libel about your site while relying on outdated information – information that it gathered for a few hours six days ago, which has no relevance today.
We still don’t know if things are sorted yet. We know this has been a devilishly frustrating experience, and damaging to our reputation and our finances. Yet we also know Google will just shrug its shoulders and do a Bart Simpson: ‘I didn’t do it.’ It’ll get blamed on the computer, which is terribly convenient. The computer will no doubt also get the blame for hiding my Google Plus status criticizing them.
It looks like we are not alone. I’ve been reading of The New York Times and The Guardian getting red-flagged. Google even decided to blacklist YouTube at one point this year (given where I think the hackers’ code comes from, I am not surprised a Google property is malicious). The difference is that the big guys are more noticeable, so Google whitelists them more quickly. Our situation actually mirrored what happened at ZDNet, except that ZDNet was cleared within hours, while we, who also fixed our problem within hours, stayed blacklisted for days. The little guy, the honest business person, the legitimate blogger, the independent online store-owner – we’re in for a much harsher ride.
With Google supplying its corrupted data to other security programs like Eset as well as browsers such as Chrome and Firefox, then putting all your eggs in one basket is terribly dangerous, as we have seen. More so if that organization has no real oversight and your complaints are silenced. And as we have seen, Google will go to great lengths to preserve its advantages in the online advertising market.
Frustrated with Google’s ongoing false accusations against our websites, I joined the Stop Badware community today (Badware Busters), and got some sensible advice from a Dr Anirban Banerjee of www.stopthehacker.com.
He had checked what Google was on about, and noted that it was still making the same accusations it did on Saturday – when we know that we had already removed the hack that day.
I told him this, and he replied:
One policy that a customer followed, since Google was just not letting them off the blacklist in spite of cleaning the server, DB, etc., was to ‘suspend/remove’ all ad code pointing to the mother pipe (your main server in your case) – get the request for reviews pushed in asap, get the sites off the blacklist (since Google did not see any openx ads, nothing to analyze, hence the sites were let off within 5 hours) – then put the ads back again.
They used a simple grep command to strip out the ad code, and then restored the pages and code from a relatively fresh backup once the blockages were lifted.
I know this is kind of hack-ish – but sometimes, in spite of all the meticulous cleaning that people do, automated systems will flag sites.
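The ‘strip, get cleared, restore’ approach Dr Banerjee describes can be sketched with grep and sed. Everything below is a hypothetical stand-in – the ad-server hostname, directory, and page content are placeholders, not our actual code:

```shell
# Hypothetical page still carrying the old ad-server snippet.
mkdir -p demo_site
cat > demo_site/index.html <<'EOF'
<p>Editorial content stays put.</p>
<script src="http://ads.example.com/openx/ajs.php"></script>
EOF

# 1. Find every file that still references the ad server.
grep -rl 'ads\.example\.com' demo_site

# 2. Strip those lines, keeping a .bak copy so the code can be
#    restored once the blacklist is lifted.
sed -i.bak '/ads\.example\.com/d' demo_site/index.html
```

After the review request clears, restoring is just a matter of copying the `.bak` files (or a fresh backup, as the customer in Dr Banerjee’s example did) back into place.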
In other words, Google can cock up. This time, it did. So you basically need to fool Google, get your site off the blacklist, and put things back to normal afterwards.
Or: there may be a drunk driver swerving left and right at the wheel of the Google truck, so it’s your job to make sure that you build a nice road in front for them, rather than insist that they clean up their act and stay on the road.
Mind you, the last time Google claimed to analyse something in two days, it took six months – here’s hoping we’re back online before then. It’s getting embarrassing telling clients what had happened, especially as most drink the Google Kool-Aid and believe the firm can do no wrong. Peel back only one layer, and you can see plenty that goes wrong.
It’s not fair, but what can you do against the Google juggernaut when so many people rely on it, especially Chrome users who are getting the false red flags more than anyone else?
Facebook received this bug report from me today (the ‘Sincerely,’ etc. at the end have been omitted).
I know you’ve said that the bug reported in the media about private messages going on to walls between 2007 and 2009 cannot be confirmed, but it has happened.
Back in those days, with ‘User Name is’, I wrote in the third person. Yet I can find these allegedly public posts in the first person on others’ walls.
Before you introduced private message threading, people often took excerpts from a previous message in their replies. I can see those, too.
Your investigations will have shown that these messages cannot be found in users’ PMs. They will also have shown that they were public to begin with. I can confirm that with a full data download in October 2011, I saw exactly the same thing as you.
This leads me to believe that some of these PMs were incorrectly classified at some stage, leading to their recent publication.
I even know of a case where a contractual dispute conducted in DMs was published.
After Timeline was introduced in September 2011, I spent a lot of time looking at previous years, because I was fascinated about how you did the annual summaries of the most significant posts (and the most significant new friendships). I distinctly remember that the number of messages on our walls increased per annum. Right now, the sequence decreases between 2007 and 2011, beginning with 786 messages in 2007. I know for a fact that that number was not 786 when Timeline was first introduced and I have a photographic memory.
Please don’t dismiss users and say that we don’t know the difference between DMs and wall posts. Most of us do, and there are many signs that these messages are private – maybe not in the way you have categorized them now, but certainly in the way they once were categorized and in the context and manner of those messages.
I’d urge everyone to check their Facebooks. While I thought the first reports about this were hoaxes (and Snopes continues to report that they are, and the US mainstream media have taken Facebook’s side), I’ve taken a look at my own account, and the structure of some of these ‘public’ 2007–9 messages is akin to private ones. Better yet, check your own and see if your private messages have been broadcast.
PS.: At one netizen’s suggestion, I looked back through my 2007 notifications and can confirm what Facebook says – at least for messages before August 1, 2007 (the day I turned off wall post notifications). Every notification correlates with a wall post or a wall-to-wall. I’m still convinced the annual summary that year showed far fewer than 786, so my only conclusion there is that Facebook must not have shown a lot of the wall-to-walls. –JY
After I got back from India, my desktop computer went into meltdown. This was Nigel Dunn’s old machine, which I took over after he went to Australia, and it gave me excellent service for over two years.
I wasn’t prepared to go and buy a brand-new machine, but having made the plunge, I’m glad I did. The installation went rather well and the only major problem was Wubi and Ubuntu, which, sadly, did not do what was promised. The installer failed, the boot sequence either revealed Linux code or a deep purple screen, and the time I spent downloading a few programs to sample was wasted (not to mention the two hours of trying to get Ubuntu to work). Shame: on principle, I really wanted to like it.
Funnily enough, everything on the Microsoft end went quite well apart from Internet Explorer 9 (the same error I reported last year), which then seemed to have taken out Firefox 9 with the same error (solved by changing the compatibility mode to Windows XP). Eudora 7.1 had some funny changes and would not load this morning without fiddling with the shortcut, Windows 7 forgot to show me the hidden files despite my changing the setting thrice, and there were some other tiny issues not worth mentioning. But I am operating in 64-bit land with a lot of RAM, GDDR5 on the graphics card, and more computing power than I could have imagined when, in 1984, my father brought home a Commodore 64, disk drive, printer and monitor, having paid around NZ$100 more than I did on Tuesday.
I could have gone out and bought the computer last week, after the old machine died. But there’s the whole thing about New Year. The focus was family time, preparing food and pigging out for New Year’s Eve (January 22 this time around), and New Year’s Day is definitely not one for popping out and spending money.
Which brings me to my next thought about how immigrant communities always keep traditions alive. You do have to wonder whether it’s still as big a deal ‘back home’: I was in Hong Kong briefly en route back to Wellington, and you didn’t really feel New Year in the air. There was the odd decoration here and there, but not what you’d imagine.
It’s the Big Fat Greek Wedding syndrome: when the film was shown in Greece, many Greeks found it insulting, portraying their culture as behind the times and anachronistic, while they had moved on back in the old country. The reality was a lot more European, the complainants noted.
And you see the same thing with the Chinese community: people who would never have given a toss about the traditions in the old country suddenly make them out to be sacrosanct in the new one. Maybe it’s motivated by a desire to transmit a sense of self to the next generation: in a multicultural society, you would hope that youngsters have the chance to pick and choose from the best traditions of both their heritage and their new nation, and carry them forward.
A retro note: I love Fontographer 3.5, so I put it on a virtual machine running XP. Fun times, courtesy of Conrad Johnston, who told me about Oracle VM VirtualBox.
I also found a great viewer, XnView, to replace the very ancient ACDSee 3.1 that I had been using as a de facto file manager. (Subsequent versions were bloatware; XnView is freeware and does nearly the same thing.) I’ve ticked almost all the boxes when it comes to software.
Because of the thoroughly modern set-up, I haven’t been able to put in a 3½-inch floppy as threatened on Twitter. Fontographer was transferred on to a USB stick, though I have yet to play with it properly inside the virtual machine. Both the Windows 7 and virtual machines are, in typical fashion, Arial-free.
Although I have seen VMs before, I am still getting a buzz out of the computer-within-a-computer phenomenon.
To those who expected me to Tweet doom and gloom from my computing experience last night, I’m sorry I disappointed you. My posts about technology, whether written on this blog or on Twitter, are not to do with some belief in a computing industry conspiracy, as someone thought. The reason: to show that even this oh-so-logical profession is as human as the next. Never, ever feel daunted because of someone’s profession: we are all human, and we are all fallible. Sometimes I like reminding all of us of that: in fact, the more self-righteous the mob, the more I seem to enjoy bringing them down to a more realistic level, where the rest of us live. We’re all a lot more equal in intellect than some would like to think, and that assessment goes right to the top of the political world.