Some interesting bugs out there on Facebook that my friends are telling me about. One has been removed from all her groups, including one that I run (we never touched her account), another cannot comment any more (an increasingly common bug now), while Felicity Frockaccino, well known on the drag scene locally and in Sydney, saw her account deleted. Unlike LaQuisha St Redfern’s earlier this year, Felicity’s has been out for weeks, and it’s affected her livelihood since her bookings were in there. Facebook has done nothing so far, yet I’ve since uncovered another bot net which they have decided to leave (have a look at this hacked account and the bots that have been added; a lot of dormant accounts in Japan and Korea have suffered this fate, and Facebook has deleted most), despite its members being very obviously fake. Delete the humans, keep the bots.
Felicity didn’t ask but I decided to write to these people again, to see if it would help. There was a missing word, unfortunately, but it doesn’t change the sentiment:
Guys, last year you apologized to drag kings and queens for deleting their accounts. But this year, you have been deleting their accounts. This is the second one that I know of, and I don’t know that many drag queens, which suggests to me that you [still] have it in for the drag community.
Felicity Frockaccino is an international drag performer, and you’ve affected her livelihood as her bookings were all in that account. This is the second time you deleted her, despite your public apology and a private one that you sent her directly. What is going on, Facebook? You retain bots and bot nets that I report, but you go around deleting genuine human users who rely on you to make their living. Unlike LaQuisha Redfern’s account, which you restored within days, this has been weeks now.
That’s right, she even received a personal apology after her account was deleted the first time. I had hoped that Facebook would have seen sense, since Felicity has plenty of fans. The first-world lesson is the same here as it is for Blogger: do not ever rely on Facebook for anything, and know that at any moment (either due to the intentional deletion on their end or the increasing number of database-write issues), your account can vanish.
Meanwhile, my 2012 academic piece, now titled ‘The impact of digital and social media on branding’, is in vol. 3, no. 1, the latest issue of the Journal of Digital and Social Media Marketing. This is available via Ingenta Connect (subscription only). JDSMM is relatively new, but all works are double-blind, peer-reviewed, and it’s from the same publisher as The Journal of Brand Management, to which I have contributed before. It was more cutting-edge in 2012 when I wrote it, and in 2013 when it was accepted for publication and JDSMM promoted its inclusion in vol. 1, no. 1, but I believe it continues to have a lot of merit for practitioners today. An unfortunate, unintentional administrative error led to its omission, but when the publisher and editors were alerted to it, they went above and beyond to remedy things while I was in the UK, and it’s out now.
Let’s see: Facebook doesn’t work on Wednesdays and Fridays. Check. Thursdays are OK, though.
It’s another one of those days where the Facebook bug that began on Wednesday (though, really, it’s been going on for years, including the famous outage of 2013, when what I am experiencing happened worldwide to a large number of users) has decided to resurface and spread. Not only can I no longer like, comment, post or share without repeated attempts, I cannot delete (Facebook makes me repeat those attempts even when a post has been successful, but doesn’t show me those till an hour later) or upload photos to messaging without repeated attempts.
The deletion is the hardest: while commenting will work after three to twelve repeats, deletion does not work at all. The dialogue box emerges, and you can click ‘Delete’. The button goes light for a while, then it’s back to the usual blue.
And this happens regardless of platform: Mac, Windows, Firefox, Opera, Android, inside a virtual machine, you name it. Java’s been updated, as have the browsers on my most-used machines; but it seems the configurations make no difference.
I am reminded how, a year ago, I had even less on Facebook. Quite a number of users were blocked for days (Facebook isn’t open on weekends, it seems), but eventually the message got through and things started working again.
My theory, and I’d be interested to learn if it holds any water, is that older or more active accounts are problematic. I mean, if spammers and spambots have more rights than legitimate users, then something is wonky; and the only thing I can see that those T&C-violating accounts have over ours is novelty. Facebook hasn’t got to them yet, or it tacitly endorses them.
As one of the beta users on Vox.com many years ago, I eventually found myself unable to compose a new blog post. It’s an old story which I have told many times on this blog. Even Six Apart staff couldn’t do it when using my username and password from their own HQ. But they never fixed it. It was a ‘shrug your shoulders’ moment, because Vox was on its way out at the company anyway. (The domain is now owned by another firm, and is a very good news website.) Unlike Facebook, they did have theories, and tried to communicate with me to fix the issue. One woman working there wondered if I had too many keywords and had reached a limit. I deleted a whole lot, but nothing ever worked. It suggested that these websites did have limits.
Computer experts tell me that it’s highly unlikely I’ve reached any sort of limit on Facebook, because of how their architecture is structured, but I’m seeing more and more of these bugs. And we are talking about a website that’s a decade old. My account dates back to 2007. Data will have been moved about and reconstituted, because the way they were handled in 2007 is different to how they are handled now. There have been articles written about this stuff.
What if, in all these changes over the last eight years (and beyond), Facebook screwed up data transfers, corrupting certain accounts? It’s entirely conceivable for a firm that makes plenty of mistakes and doesn’t even know what time zones are. Or deletes a complainant’s account instead of the pirate’s one that she complained about. (This has been remedied, incidentally, the day after my blog post and a strongly worded note to Facebook on behalf of my friend.)
The usual theory I hear from those in the know is that certain accounts are on certain servers, and when those are upgraded, some folks will experience difficulties. That seems fair, but I would be interested to know just what groups us together.
Last time I downloaded all my data off Facebook, and this was several years ago, I had 3 Gbyte. It wouldn’t surprise me in the slightest if that figure were now 6 Gbyte. That’s a lot to handle, and when you multiply it by millions of accounts, some will wind up buggy. Ever had a hard drive with dodgy fragments? Or a large transfer go wrong? Facebook might have better gear than us, but it’s not perfect.
I don’t believe for a second that certain people are targeted (a theory I see on forums such as Get Satisfaction, with Republicans blaming Democrats and Democrats blaming Republicans), but I do believe that something binds us together, and it is buried within the code. But, like Vox, it may be so specific that there’s nothing their boffins can do about it. You simply have to accept that some days, Facebook does not let you post, comment, like, share, delete or message. The concern is that this, like random deletions, can happen to anyone, because these bugs never seem to go away. Looking at my own record on Get Satisfaction, they are increasing by the year.
Since the Firefox for Windows updates in November, I’ve had a big problem with the Mozilla browser, and with the Waterfox 64-bit version based on it: they won’t display text. I had to downgrade to Waterfox 32.0.3 for the last month or so, but it’s begun crashing more and more regularly (from once a day to thrice today; I visit largely the same sites, so why does software ‘decay’ like this?).
On the latest incarnations of Firefox and Waterfox, linked fonts work, but the majority of system fonts vanished from the browser. And, for once, I’m not alone, if Bugzilla is any indication. It is probably related to a bug I filed in 2011.
I’ve had some very helpful people attend to the bug report (it’s great when you get into Bugzilla, where the programming experts reside), but sadly, a lot of the fixes require words. And, unfortunately, those are the things that no longer displayed in Firefox, not even in safe mode.
As many of you know, there’s no way I’d switch to Chrome (a.k.a. the ‘Aw, snap!’ browser) due to its frequent crashes on my set-up, and its memory hogging. There’s also that Google thing.
After some searching tonight, I came across Cyberfox. It’s not a Firefox alternative that comes up very often. Pale Moon is the one that a lot of people recommend, but I have become accustomed to Firefox’s Chrome-like minimalism, and wanted something that had a Firefox open-source back end to accompany it. Cyberfox, which lets you choose your UI, has the familiar Firefox Australis built in.
I made the switch. And all is well. Cyberfox forces you to make a new profile, something that Waterfox does not, but there isn’t much of an issue importing bookmarks (you have to surf to the directory where they are stored, and import the JSON file), and, of course, you have to get all your plug-ins and do all your opt-outs again. It also took me a while to program in my cookie blocks. But the important thing is: it displays text.
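For the curious, the JSON bookmark file mentioned above is a nested tree: folders carry a ‘children’ array, and each bookmark leaf carries a ‘uri’. As a rough sketch only (the helper function and the sample data below are invented for illustration; a real backup would sit in the profile’s bookmarkbackups folder), flattening such a tree looks like this:

```python
import json

# Minimal sketch: flatten a Firefox-style JSON bookmark backup into
# (title, uri) pairs. The sample below is hand-made in the same nested
# shape; a real file would be read with json.load() from the profile's
# bookmarkbackups directory.

def walk(node, out):
    """Recursively collect bookmark entries from a folder tree."""
    if "uri" in node:                       # a bookmark leaf
        out.append((node.get("title", ""), node["uri"]))
    for child in node.get("children", []):  # a folder's contents
        walk(child, out)
    return out

sample = json.loads("""
{"title": "root", "children": [
  {"title": "Blogroll", "children": [
     {"title": "Lucire", "uri": "http://lucire.com/"}]},
  {"title": "Autocade", "uri": "http://autocade.net/"}]}
""")

print(walk(sample, []))
```

Nothing more than that: the browser’s own importer does the same traversal for you once you point it at the file.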
You’d think that was a pretty fundamental feature for a web browser.
The text rendering is different, and probably better. I’ve always preferred the way text is rendered on a Macintosh, so for Cyberfox to get a bit nearer that for some fonts is very positive. It took me by surprise, and my initial instinct was that the display was worse; on review, Firefox displayed EB Garamond, for example, in a slightly bitmapped fashion; Cyberfox’s antialiasing and subpixel rendering are better.
Firefox and Waterfox on Windows 7
Cyberfox on Windows 7
Here’s where the above text is from.
Gone is the support for the old PostScript Type 1 fonts (yes, I still have some installed) but that’s not a big deal when almost everything is TrueType and OpenType these days.
The fact Cyberfox works means one of two things: (a) Cyberfox handles typography differently; or (b) as Cyberfox forces us to have a new profile, there is something in the old profiles that caused Firefox to display no text. That’s beyond my knowledge as a user, but, for now, my problems seem to be solved, at least until someone breaks another feature in the future!
PS.: That lasted all of a few hours. On rebooting, Cyberfox does exactly the same thing. All my text has vanished, and the rendering of the type has changed to what Firefox and Waterfox do. No changes to the settings were made while the computer was turned off, since, well, that would be impossible. Whoever said computers were logical devices?
Of yesterday’s options, (a) is actually correct, but how do we get these browsers behaving the way they did in that situation? In addition, the PostScript Type 1 fonts that the browser was trying to access have since been replaced.
Those of you who follow this blog know that I believe Facebook’s servers are reaching their limits. In June 2014, when there was a 69-hour outage for me (and at least 30 minutes for most other Facebook users), I noted I was recording a marked increase in Facebook bugs before the crash. And the even longer outage yesterday (some reports say it was 35 minutes, but some media have reported it was up to 90) was also prefaced by some curious bugs that were identical to the earlier ones.
I thought it was very odd that in all the articles I have read today about the issue, no media have been able to get a comment from Facebook. It made me wonder if people had clammed up because of what it could mean for the share price.
And I do realize how preposterous my theory sounds, as the logical thing to ask is: how could a company the size of Facebook not be equipped to handle its growth?
Well, how could a company the size of Facebook not be equipped to deal with time zones outside US Pacific? And we know a company the size of Google is not equipped to deal with the false malware warnings it sends out.
However, the geeks have reported in. There are two notices at the Facebook developers’ status page that relate to the outage.
If you can understand the technobabble, they are: ‘Traffic and error-rates are almost back to normal after a coordinated intervention by our engineering teams. We are now monitoring the situation and we have our best engineers determining the root-cause of this issue that affected much of our web fleet. We apologize for any inconvenience and we aim to ensure that this issue does not repeat,’ and ‘Platform has been stable for >5 hours and our engineers have reproduced the complex issue that was causing many of our www/api servers to run out of resources. The team is now working on the final fix, but we are confident that there will be no further regression. Thank you for your patience and we apologize for any issues that we caused for your apps. Have a great weekend.’
If I understand them correctly, the second actually says that the servers ran out of resources.
Hopefully, the above means Facebook has fixed the error, which I believe to be the same as the one in June. Facebook itself had then discounted that it was an attack.
No wonder no one has offered the media a comment, if the site is falling over so regularly because of its bugs.
Mea culpa: OK, I was wrong. Facebook got things back up in about 20 minutes for some users, who are Tweeting about it. However, as of 8.37 a.m. GMT, I am still seeing Tweeters whose Facebooks remain down.
Looks like some people do work there after hours. What a surprise!
However, I reckon things aren’t all well there, with two big outages in such a short space of time, and I stand behind my suspicions that Facebook has reached some sort of limit, given the increase in bug reports and the widespread nature of the outage tonight.
That didn’t last long, did it?
Facebook returned late morning on Tuesday: as predicted, it would only be back once the folks at Facebook, Inc. got back to work at 9 a.m. on Monday and realized something had run amok.
Now it’s Thursday night NZST, and if Twitter’s to be believed, a lot of people globally can no longer access Facebook. This is a major outage: it seems one of every few Tweets is about Facebook being down.
Just over two days, and it’s dead again.
Looks like I wasn’t wrong when I wondered whether I had hit a limit on Facebook. To be out for nearly three days suggests that there was something very wrong with the databasing, and the number of people affected was increasing daily.
And when you look at the bugs I had been filing at Get Satisfaction, there has been a marked increase in errors over the past few weeks, suggesting some instability there.
For it to have such a major failing now, after being out for some users this weekend, doesn’t surprise me. This time, groups and Messenger have been taken out, too.
Facebook really should have taken note of the errors being reported by users.
My experience with Vox was very similar, although there the techs couldn’t get me back online. They gave up at the end of 2009. The similarities are striking: both sites had databasing issues but only with certain users; and both sites were overrun with spammers creating fake accounts. That’s one thing that did piss me off: spammers having more privileges than a legitimate user.
Well, we can probably wait till 9 a.m. PDT, when they get back to work. It may say, ‘We’re working on getting this fixed as soon as we can,’ in the error message, but as far as I can make out from what happened to me, Facebook is a Monday–Friday, 9–5 operation, not a 24-hour, seven-day one.
At least it died on a weekday: we can count ourselves lucky.
Long before Google started pissing me off with its various funny acts (such as spying on users without their consent), it released a program called Google Earth. I installed it in July 2009 on my laptop, and decided to feed in 1600 Pennsylvania Avenue NW, Washington, DC 20009, just to see how it had rendered the White House. Other than various Wellington locales, that was my first search query. This was the result, confirmed by others at the time:
There’s no White House there, unless when the Google Earth people made the program, aliens had beamed up the entire block temporarily.
Google has since fixed this. However, back in 2009, it didn’t know where the White House was. And here I was, thinking that it was an American program, where those working on it would double-check where its most famous building stood. This was four years after Google Earth was released.
So any time people say that a big company full of techs must know more than an individual, think of this example, and some others I’ve posted over the years.
The same lesson, I might add, applies to big countries versus small countries. Big definitely doesn’t mean right. The key for the small countries often is to outmanœuvre the large ones, by being more inventive and more innovative.
God, I love New Zealand.
That was an interesting experiment. Although Lucire Men is still clear (for now), Google decided it would play silly buggers a few hours after we put our (clean) ad server code back on Autocade:
But why? Here’s what Google says:
which means: we can’t find anything wrong with this site since April 8, even though our last scan was on the 23rd. Really? There has been nothing wrong for 15 days, but you’ll still block our site? (Note: Google did not block this site on the 23rd.)
Let’s go to Google Webmaster Tools to see what it says there:
That’s right: nothing. There’s nothing wrong with the site.
Maybe we’ve been flagged somewhere else, then? How about Stop Badware?
Nope, we’re all fine there, too.
In fact, even Google is wrong when it says there were problems on April 8: another sign of its malware bot reading from a cache instead of fresh pages, because we say we fixed everything on April 6. Well, here’s what Google itself says about Autocade when you go into Webmaster Tools in more depth:
which correlates with the claims we have made all along: our ad server got hacked on April 6 (NZST), and we sorted it within hours that day.
We’re interested to see if the false malware warnings can carry on for a month. After all, Google will block a blog for six months even though it says it will lift a block in 48 hours after an investigation. Things take a bit longer there than they claim. There’s a case of one gentleman who has had his site blocked by Google for two months for no reason. I’m sure many, many others are being wrongly identified by Google, and there are far too many companies relying on the Californian company’s hypocrisy in identifying malware.
The Google belief that webmasters are wrongly claiming there to be false positives is looking more dubious by the day.
PS.: The last post at this forum entry is interesting: Google blocks a website based on stale data. The website where the malware allegedly was did not even exist, but it still triggered a warning at Google. The webmaster writes, ‘The site concerned doesn’t exist and more to the point, there is no DNS record for it either, so it cannot exist. / The IP which was once assigned to it is now assigned to someone else.’ That was in March. Judging by the articles online, Google’s been having problems with this particular bot since the beginning of 2013. The sooner they retire the program, the better, I say.—JY
Not a political post, sorry. This one follows up from the Google boycott earlier this month and is further proof of how the house of G gets it very, very wrong when it comes to malware warnings.
As those who followed this case know, our ad server was hacked on April 6, but both my web development expert, Nigel Dunn, and I fixed everything within hours. However, Google continued to block any website linking to that server, including this blog (which, as it turned out, delayed my mayoral campaign announcement sufficiently for things to go out on the same day as the marriage equality bill’s final reading and Baroness Thatcher’s funeral) and any of our websites carrying advertising. Lucire was blacklisted by Google for six days despite being clean, and some of our smaller websites were even blocked for weeks for people using Chrome and Firefox.
We insisted nothing was wrong, and services such as Stop Badware gave our sites the all-clear. Even a senior Google forum volunteer, who has experience in the malware side of things, couldn’t understand why the block had continued. There’s just no way of reaching Google people though, unless you have some inside knowledge.
We haven’t done any more work on the ad server. We couldn’t. We know it’s clean. But we eventually relented and removed links to it, on the advice of malware expert Dr Anirban Banerjee, because he believed that Google does get it wrong. His advice: remove it, then put it back after a few days.
The problem is, Google gets it wrong at the expense of small businesses who can’t give it sufficient bad publicity to shatter its illusory ‘Don’t be evil’ claim. It’s like the Blogger blog deletions all over again: unless you’re big enough to fight, Google won’t care.
Last night, we decided to put the old code (the one that Google claimed was dodgy) back on to the Lucire Men website. It’s not a major website, just one that we set up more or less as an experiment. Since this code is apparently so malicious, according to Google, it would be logical to expect that by this morning, there would be warnings all over it. Your browser would exclaim, ‘You can’t go to that site; you will be infected!’
Guess what? Nothing of the sort has happened.
It’s clean, just as we’ve been saying since April 6.
And to all those ‘experts’ who claim Google never gets it wrong, that the false positives we netizens report are all down to our own ignorance of computing: well, there’s proof that Google is fallible. Very fallible. And very harmful when it comes to small businesses, which can lose a lot of revenue from false accusations. Even we had advertising contracts cancelled during that period, because people prefer believing Google. One ad network pulled every single ad they had with Lucire’s online edition.
People are exposed to its logo every day when they do a web search. And those web searches, they feel, are accurate and useful to them, reinforcing the warm fuzzies.
Can we really expect a company that produces spyware (and ignores red-flagging its own, naturally) to be honest about reporting the existence of malware on other people’s websites? Especially when the code the hackers used on April 6 has Google’s name and links all over it?
It can be dangerous, as this experience has illustrated, to put so much faith in the house of G. We’ll be steadily reintroducing our ad server code on to our websites. While we’re confident we’re clean, we have to wear kid gloves dealing with Google’s unpredictable manner.
It’s day six on the Google blacklist for Lucire. And no, we still don’t know what they are talking about. StopBadware doesn’t know what they are talking about. Our web guys and all our team in different parts of the world don’t know what they are talking about.
Today, I decided to venture to the Google forums. Google forums are generally not a good place to go to, based on my experience with Blogger, but I came across a really helpful guy called Joe (a.k.a. Redleg x3), a level 12 participant, who has gone some way to redeeming them.
I told Joe the same story. He begins writing, ‘First I think you really need an explanation from Google, I can see why your site was flagged originally but do not understand why Google did not clear it today.’
Exactly. But what was fascinating was that when he checked through a private version of aw-snap.info, which helps you see what malware spiders see, he found the old Google Adsense code the hackers injected.
This very code has been absent from our servers since Saturday, otherwise we would never have received the all-clear from StopBadware.org. We also don’t use a caching service any more (we used to use Cloudflare). But, if Google saw what Joe did, then it means Google’s own bot can’t load fresh files. It loads cached ones, which means it keeps red-flagging stuff that isn’t there.
If you read between the lines of what Joe wrote, it’s clear that Google relies on out-of-date data for its malware bot. He checked the infected site, and the file that caused all the problems has gone. And we know the hacks are gone from our system. It’s totally in line with what we were told by Anirban Banerjee of Stopthehacker.com about the errors that Google makes, too. I can only conclude that it’s acceptable for Google to publish libel about your site while relying on outdated information: information that it gathered for a few hours six days ago, which has no relevance today.
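Joe could spot the stale injection because tools like aw-snap.info fetch a page much as a malware spider does and show you the scripts it pulls in. A rough first-pass check anyone can run on their own freshly served HTML is to list the script tags that load from third-party hosts, since injected ad code usually arrives that way. This is only an illustrative sketch of that idea: the regex, the function name and the sample page are invented for the example, and it is no substitute for a proper scanner.

```python
import re

# Illustrative sketch: find <script src="..."> tags in an HTML document
# that point at hosts other than our own domain. The sample page below
# is made up; in practice you would fetch your site's freshly served
# HTML and run it through the same scan.

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.I)

def external_scripts(html, own_domain):
    """Return script URLs that load from outside our own domain."""
    return [u for u in SCRIPT_SRC.findall(html) if own_domain not in u]

sample = """
<html><head>
<script src="https://example.com/site.js"></script>
<script src="https://ads.example.net/injected.js"></script>
</head><body>ok</body></html>
"""

print(external_scripts(sample, "example.com"))
```

If a script URL turns up here that you never put on the page, that is the kind of thing a malware spider flags; if nothing turns up on a fresh fetch but the warning persists, the scanner is, as in our case, reading from a cache.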
We still don’t know if things are sorted yet. We know this has been a devilishly frustrating experience, and damaging to our reputation and our finances. Yet we also know Google will just shrug its shoulders and do a Bart Simpson: ‘I didn’t do it.’ It’ll get blamed on the computer, which is terribly convenient. It’ll also blame the computer for covering up my Google Plus status criticizing them.
It looks like we are not alone. I’ve been reading of The New York Times and The Guardian getting red-flagged. Google even decided to blacklist YouTube at one point this year (given where I think the hackers’ code comes from, I am not surprised a Google property is malicious). The difference is that the big guys are more noticeable, so Google whitelists them more quickly. Our situation actually mirrored what happened at ZDNet, except that they were cleared within hours, even though we, too, had fixed our problem within hours. The little guy, the honest business person, the legitimate blogger, the independent online store-owner: we’re in for a much harsher ride.
With Google supplying its corrupted data to other security programs like Eset as well as browsers such as Chrome and Firefox, then putting all your eggs in one basket is terribly dangerous, as we have seen. More so if that organization has no real oversight and your complaints are silenced. And as we have seen, Google will go to great lengths to preserve its advantages in the online advertising market.