Posts tagged ‘errors’


Switching to Cyberfox, after Waterfox and Firefox stopped displaying text

23.12.2014

Since the Firefox for Windows updates in November, I’ve had a big problem with the Mozilla browser, and the Waterfox 64-bit version based on it: they won’t display text. I had to downgrade to Waterfox 32.0.3 for the last month or so, but it’s begun crashing more and more regularly (from once a day to thrice today—I visit largely the same sites, so why does software “decay” like this?).

   On the latest incarnations of Firefox and Waterfox, linked fonts work, but the majority of system fonts have vanished from the browser. And, for once, I’m not alone, if Bugzilla is any indication. It is probably related to a bug I filed in 2011.
   I’ve had some very helpful people attend to the bug report—it’s great when you get into Bugzilla, where the programming experts reside—but sadly, a lot of the fixes require words. And, unfortunately, those are the things that no longer display in Firefox, not even in safe mode.
   As many of you know, there’s no way I’d switch to Chrome (a.k.a. the ‘Aw, snap!’ browser) due to its frequent crashes on my set-up, and its memory hogging. There’s also that Google thing.
   After some searching tonight, I came across Cyberfox. It’s not a Firefox alternative that comes up very often. Pale Moon is the one that a lot of people recommend, but I have become accustomed to Firefox’s Chrome-like minimalism, and wanted something built on the open-source Firefox back end. Cyberfox, which lets you choose your UI, has the familiar Firefox Australis built in.
   I made the switch. And all is well. Cyberfox forces you to make a new profile, something that Waterfox does not, but there isn’t much of an issue importing bookmarks (you have to surf to the directory where they are stored, and import the JSON file; there is a sketch below of where that file usually lives), and, of course, you have to get all your plug-ins and do all your opt-outs again. It also took me a while to program in my cookie blocks. But the important thing is: it displays text.
   You’d think that was a pretty fundamental feature for a web browser.
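   As for that bookmarks file: below is a minimal sketch of where a Firefox-style profile on Windows usually keeps its JSON backups. The profile path and layout are assumptions based on a standard Firefox install; Waterfox and Cyberfox each use their own profile directory, so adjust accordingly.

      # A sketch only: find the newest bookmarks backup in a Firefox-style
      # profile on Windows. The directory layout below is standard Firefox's;
      # Waterfox and Cyberfox keep their equivalents elsewhere.
      import json
      import os
      from pathlib import Path

      profiles = Path(os.environ["APPDATA"]) / "Mozilla" / "Firefox" / "Profiles"
      backups = sorted(profiles.glob("*/bookmarkbackups/bookmarks-*.json"))

      def count_bookmarks(node):
          # Backups are a JSON tree; leaf nodes carry a "uri" field.
          children = node.get("children", [])
          return (1 if "uri" in node else 0) + sum(count_bookmarks(c) for c in children)

      if backups:
          newest = backups[-1]  # date-stamped file names sort chronologically
          tree = json.loads(newest.read_text(encoding="utf-8"))
          print(newest, "holds", count_bookmarks(tree), "bookmarks")
      else:
          print("No JSON backups found; export one from the Bookmarks Library first.")

   The browser’s import dialogue does the real work, of course; this is just a quick way of confirming you are pointing it at the right file.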
   The text rendering is different, and probably better. I’ve always preferred the way text is rendered on a Macintosh, so for Cyberfox to get a bit nearer that for some fonts is very positive. It took me by surprise, and my initial instinct was that the display was worse; on review, Firefox displayed EB Garamond, for example, in a slightly bitmapped fashion; Cyberfox’s antialiasing and subpixel rendering are better.

[Screenshot: Firefox and Waterfox on Windows 7]

[Screenshot: Cyberfox on Windows 7]

Here’s where the above text is from.
   Gone is the support for the old PostScript Type 1 fonts (yes, I still have some installed), but that’s not a big deal when almost everything is TrueType and OpenType these days.
   The fact Cyberfox works means one of two things: (a) Cyberfox handles typography differently; or (b) since Cyberfox forces us to have a new profile, something in the old profiles was causing Firefox to display no text. That’s beyond my knowledge as a user, but, for now, my problems seem to be solved—at least until someone breaks another feature in the future!

PS.: That lasted all of a few hours. On rebooting, Cyberfox does exactly the same thing. All my text has vanished, and the rendering of the type has changed to what Firefox and Waterfox do. No changes to the settings were made while the computer was turned off, since, well, that would be impossible. Whoever said computers were logical devices?
   Of yesterday’s options, (a) is actually correct—but how do we get these browsers behaving the way they did in that situation? In addition, the PostScript Type 1 fonts that the browser was trying to access have since been replaced.

Posted in internet, technology, typography | 3 Comments »


Has Facebook admitted its servers ran out of resources?

02.08.2014

Those of you who follow this blog know that I believe Facebook’s servers are reaching their limits. In June 2014, when there was a 69-hour outage for me—and at least 30 minutes for most other Facebook users—I noted I was recording a marked increase in Facebook bugs before the crash. And the even longer outage yesterday—some reports say it was 35 minutes but some media have reported it was up to 90—was also prefaced by some curious bugs that were identical to the earlier ones.
   I thought it was very odd that in all the articles I have read today about the issue, no media have been able to get a comment from Facebook. It made me wonder if people had clammed up because of what it could mean for the share price.
   And I do realize how preposterous my theory sounds, as the logical thing to ask is: how could a company the size of Facebook not be equipped to handle its growth?
   Well, how could a company the size of Facebook not be equipped to deal with time zones outside the US Pacific? And we know a company the size of Google is not equipped to deal with the false malware warnings it sends out.
   However, the geeks have reported. There are two notices at the Facebook developers’ status page that relate to the outage.
   If you can understand the technobabble, they are: ‘Traffic and error-rates are almost back to normal after a coordinated intervention by our engineering teams. We are now monitoring the situation and we have our best engineers determining the root-cause of this issue that affected much of our web fleet. We apologize for any inconvenience and we aim to ensure that this issue does not repeat,’ and ‘Platform has been stable for >5 hours and our engineers have reproduced the complex issue that was causing many of our www/api servers to run out of resources. The team is now working on the final fix, but we are confident that there will be no further regression. Thank you for your patience and we apologize for any issues that we caused for your apps. Have a great weekend.’
   If I understand them correctly, the second actually says that the servers ran out of resources.
   Hopefully, the above means Facebook has fixed the error, which I believe to be the same as the one in June. Facebook itself had then discounted that it was an attack.
   No wonder no one has offered the media a comment, if the site is falling over so regularly because of its bugs.

Posted in internet, media, USA | 1 Comment »


Facebook reaches its limits again: ‘Sorry, something went wrong’

19.06.2014

Mea culpa: OK, I was wrong. Facebook got things back up in about 20 minutes for some users, who are Tweeting about it. However, as of 8.37 a.m. GMT, I am still seeing Tweeters whose Facebooks remain down.
   Looks like some people do work there after hours. What a surprise!
   However, I reckon things aren’t all well there, with two big outages in such a short space of time—and I stand behind my suspicions that Facebook has reached some sort of limit, given the increase in bug reports and the widespread nature of the outage tonight.

That didn’t last long, did it?
   Facebook returned late morning on Tuesday—as predicted, it would only be back once the folks at Facebook, Inc. got back to work at 9 a.m. on Monday and realized something had run amok.
   Now it’s Thursday night NZST, and if Twitter’s to be believed, a lot of people globally can no longer access Facebook. This is a major outage: it seems one of every few Tweets is about Facebook being down.
   Just over two days, and it’s dead again.
   Looks like I wasn’t wrong when I wondered whether I had hit a limit on Facebook. To be out for nearly three days suggests that there was something very wrong with the databasing, and the number of people affected was increasing daily.
   And when you look at the bugs I had been filing at Get Satisfaction, there has been a marked increase in errors over the past few weeks, suggesting that there was some instability there.
   For it to have such a major failing now, after being out for some users this weekend, doesn’t surprise me. This time, groups and Messenger have been taken out, too.
   Facebook really should have taken note of the errors being reported by users.
   My experience with Vox was very similar, although there the techs couldn’t get me back online. They gave up at the end of 2009. The similarities are striking: both sites had databasing issues but only with certain users; and both sites were overrun with spammers creating fake accounts. That’s one thing that did piss me off: spammers having more privileges than a legitimate user.
   Well, we can probably wait till 9 a.m. PDT when they get back to work. It may say, ‘We’re working on getting this fixed as soon as we can,’ in the error message, but as far as I can make out from what happened to me, Facebook is a Monday–Friday, 9–5 operation, not a 24-hour, seven-day one.
   At least it died on a weekday: we can count ourselves lucky.

Posted in internet, technology, USA | No Comments »


Big doesn’t necessarily mean right

29.04.2013

Long before Google started pissing me off with its various funny acts (such as spying on users without their consent), it released a program called Google Earth. I installed it in July 2009 on my laptop, and decided to feed in 1600 Pennsylvania Avenue NW, Washington, DC 20500, just to see how it had rendered the White House. Other than various Wellington locales, that was my first search query. This was the result, confirmed by others at the time:

There’s no White House there, unless, when the Google Earth people made the program, aliens had beamed up the entire block temporarily.
   Google has since fixed this. However, back in 2009, it didn’t know where the White House was. And here I was, thinking that it was an American program, where those working on it would double-check where its most famous building stood. This was four years after Google Earth was released.
   So any time people say that a big company full of techs must know more than an individual, think of this example, and some others I’ve posted over the years.
   The same lesson, I might add, applies to big countries versus small countries. Big definitely doesn’t mean right. The key for the small countries often is to outmanœuvre the large ones, by being more inventive and more innovative.
   God, I love New Zealand.

Posted in business, humour, internet, technology, USA | 1 Comment »


The answer’s no: Google’s still in a dream world

25.04.2013

That was an interesting experiment. Although Lucire Men is still clear (for now), Google decided it would play silly buggers a few hours after we put our (clean) ad server code back on Autocade:

   But why? Here’s what Google says:

which means: we can’t find anything wrong with this site since April 8, even though our last scan was on the 23rd. Really? There has been nothing wrong for 15 days, but you’ll still block our site? (Note: Google did not block this site on the 23rd.)
   Let’s go to Google Webmaster Tools to see what it says there:

That’s right: nothing. There’s nothing wrong with the site.
   Maybe we’ve been flagged somewhere else, then? How about Stop Badware?

Nope, we’re all fine there, too.
   In fact, even Google is wrong when it says there were problems on April 8—another sign of its malware bot reading from a cache instead of fresh pages, because we fixed everything on April 6. Well, here’s what Google itself says about Autocade when you go into Webmaster Tools in more depth:

which correlates with the claims we have made all along: our ad server got hacked on April 6 (NZST), and we sorted it within hours that day.
   We’re interested to see if the false malware warnings can carry on for a month—after all, Google will block a blog for six months even though it says it will lift a block in 48 hours after an investigation. Things take a bit longer there than they claim. There’s a case of one gentleman who has had his site blocked by Google for two months for no reason. I’m sure many, many others are being wrongly identified by Google—and there are far too many companies relying on the Californian company’s hypocrisy in identifying malware.
   The Google belief that webmasters are wrongly claiming there to be false positives is looking more dubious by the day.

PS.: The last post at this forum entry is interesting: Google blocks a website based on stale data. The website where the malware allegedly was did not even exist, but it still triggered a warning at Google. The webmaster writes, ‘The site concerned doesn’t exist and more to the point, there is no DNS record for it either—so it cannot exist. / The IP which was once assigned to it is now assigned to someone else.’ That was in March. Judging by the articles online, Google’s been having problems with this particular bot since the beginning of 2013. The sooner they retire the program, the better, I say.—JY

Posted in internet, publishing, technology, USA | 6 Comments »


Putting back allegedly “malicious” code: has Google caught up with reality?

25.04.2013

Not a political post, sorry. This one follows up from the Google boycott earlier this month and is further proof of how the house of G gets it very, very wrong when it comes to malware warnings.
   As those who followed this case know, our ad server was hacked on April 6 but both my web development expert, Nigel Dunn, and I fixed everything within hours. However, Google continued to block any website linking to that server, including this blog—which, as it turned out, delayed my mayoral campaign announcement sufficiently for things to go out on the same day as the marriage equality bill’s final reading and Baroness Thatcher’s funeral—and any of our websites carrying advertising. Lucire was blacklisted by Google for six days despite being clean, and some of our smaller websites were even blocked for weeks for people using Chrome and Firefox.
   We insisted nothing was wrong, and services such as Stop Badware gave our sites the all-clear. Even a senior Google forum volunteer, who has experience in the malware side of things, couldn’t understand why the block had continued. There’s just no way of reaching Google people though, unless you have some inside knowledge.
   We haven’t done any more work on the ad server. We couldn’t. We know it’s clean. But we eventually relented and removed links to it, on the advice of malware expert Dr Anirban Banerjee, because he believed that Google does get it wrong. His advice: remove it, then put it back after a few days.
   The problem is, Google gets it wrong at the expense of small businesses who can’t give it sufficient bad publicity to shatter its illusory ‘Don’t be evil’ claim. It’s like the Blogger blog deletions all over again: unless you’re big enough to fight, Google won’t care.
   Last night, we decided to put back the old code—the one that Google claimed was dodgy—on to the Lucire Men website. It’s not a major website, just one that we set up more or less as an experiment. Since this code is apparently so malicious, according to Google, it would be logical to expect that by this morning, there would be warnings all over it. Your browser would exclaim, ‘You can’t go to that site—you will be infected!’
   Guess what? Nothing of the sort has happened.
   It’s clean, just as we’ve been saying since April 6.
   And to all those “experts” who claim Google never gets it wrong, and that the false positives we netizens report are all down to our own ignorance with computing: well, there’s proof that Google is fallible. Very fallible. And very harmful when it comes to small businesses who can lose a lot of revenue from false accusations. Even we had advertising contracts cancelled during that period because people prefer believing Google. One ad network pulled every single ad they had with Lucire’s online edition.
   People are exposed to its logo every day when they do a web search. And those web searches, they feel, are accurate and useful to them, reinforcing the warm fuzzies.
   Can we really expect a company that produces spyware (and ignores red-flagging its own, naturally) to be honest about reporting the existence of malware on other people’s websites? Especially when the code the hackers used on April 6 has Google’s name and links all over it?
   It can be dangerous, as this experience has illustrated, to put so much faith in the house of G. We’ll be steadily reintroducing our ad server code on to our websites. While we’re confident we’re clean, we have to wear kid gloves dealing with Google’s unpredictable manner.

Posted in business, internet, media, New Zealand, publishing, technology, USA | No Comments »


Webmaster sees Google blacklist his site for two months

13.04.2013

No matter how bad you think you’ve got it, some poor bugger has it worse. One webmaster, Steven Don, has had Google claim that he has anywhere between nine and fourteen trojans on his website, but he has none. The Google Safe Browsing page claims nine trojans presently, but can’t say which domains he has supposedly infected.
   If you read through the page, you can tell that, like our own Nigel Dunn, he’s no amateur at this stuff.
   He has rebuilt the sites from scratch, and compared the files he has with the ones on the server, and there are no differences. Yet Google refuses to acknowledge that his site is clean after two months.
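   That sort of verification is easy to reproduce. Here is a minimal sketch, assuming the rebuilt site sits in one local directory and a copy pulled down from the server in another (both directory names are hypothetical), which flags any file whose contents differ:

      # Compare a local rebuild of a site against a copy downloaded from
      # the server, by hashing every file in both trees.
      import hashlib
      from pathlib import Path

      def tree_hashes(root):
          # Map each file's path (relative to the root) to a SHA-256 digest.
          return {
              str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
              for p in root.rglob("*")
              if p.is_file()
          }

      local = tree_hashes(Path("site_rebuild"))       # hypothetical local copy
      server = tree_hashes(Path("site_from_server"))  # hypothetical server download

      for path in sorted(set(local) | set(server)):
          if local.get(path) != server.get(path):
              print("differs or missing:", path)

   If that loop prints nothing, the two trees are identical, which is the position this webmaster found himself in, Google’s claims notwithstanding.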
   The only things he cannot vouch for himself are the Google Analytics and Google Adsense codes, and the Google Plus One button. And that makes me wonder about Google Adsense once again.

Posted in business, internet, technology, USA | 2 Comments »


Day six of the Google boycott: if The New York Times isn’t safe from blacklisting, then how can we be?

11.04.2013

It’s day six on the Google blacklist for Lucire. And no, we still don’t know what they are talking about. StopBadware doesn’t know what they are talking about. Our web guys and all our team in different parts of the world don’t know what they are talking about.
   Today, I decided to venture to the Google forums. Google forums are generally not a good place to go, based on my experience with Blogger, but I came across a really helpful guy called Joe (a.k.a. Redleg x3), a level 12 participant, who has gone some way to redeeming them.
   I told Joe the same story. He began by writing, ‘First I think you really need an explanation from Google, I can see why your site was flagged originally but do not understand why Google did not clear it today.’
   Exactly. But what was fascinating was that when he checked through a private version of aw-snap.info, which helps you see what malware spiders see, he found the old Google Adsense code the hackers injected.
   This very code has been absent from our servers since Saturday; otherwise, we would never have received the all-clear from StopBadware.org. We also don’t use a caching service any more (we used to use Cloudflare). But, if Google saw what Joe did, then it means Google’s own bot can’t load fresh files. It loads cached ones, which means it keeps red-flagging stuff that isn’t there.
   If you read between the lines of what Joe wrote, then it’s clear that Google relies on out-of-date data for its malware bot. He checked the infected site and the file that caused all the problems has gone. And we know the hacks are gone from our system. It’s totally in line with what we were told by Anirban Banerjee of Stopthehacker.com on the errors that Google makes, too. I can only conclude that it’s acceptable for Google to publish libel about your site while relying on outdated information—information that it gathered for a few hours six days ago, which has no relevance today.
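   You can test what is actually being served, as opposed to what a scanner’s cache remembers, with something as simple as the sketch below. The URL and the injected string are placeholders, not the real ones from our server:

      # Fetch a page fresh, asking any intermediary not to serve a cached
      # copy, then check whether the injected code is still present.
      import urllib.request

      URL = "http://www.example.com/index.html"  # placeholder URL
      INJECTED = "pagead2.example-ads.net"       # placeholder for the injected snippet

      request = urllib.request.Request(URL, headers={
          "Cache-Control": "no-cache",
          "Pragma": "no-cache",
      })
      with urllib.request.urlopen(request) as response:
          body = response.read().decode("utf-8", errors="replace")

      print("still present" if INJECTED in body else "clean")

   Against our pages, a check like this would come back clean, because the code simply is not there any more; a bot that reads from a stale cache never re-fetches to find that out.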
   We still don’t know if things are sorted yet. We know this has been a devilishly frustrating experience, and damaging to our reputation and our finances. Yet we also know Google will just shrug its shoulders and do a Bart Simpson: ‘I didn’t do it.’ It’ll get blamed on the computer, which is terribly convenient. It’ll also blame the computer for covering up my Google Plus status criticizing them.
   It looks like we are not alone. I’ve been reading of The New York Times and The Guardian getting red-flagged. Google even decided to blacklist YouTube at one point this year (given where I think the hackers’ code comes from, I am not surprised a Google property is malicious). The difference is that the big guys are more noticeable, so Google whitelists them more quickly. Our situation actually mirrored what happened at ZDNet, except they got cleared within hours (we, too, fixed our problem within hours, yet we stayed blacklisted for days). The little guy, the honest business person, the legitimate blogger, the independent online store-owner—we’re in for a much harsher ride.
   With Google supplying its corrupted data to other security programs like Eset as well as browsers such as Chrome and Firefox, putting all your eggs in one basket is terribly dangerous, as we have seen. More so if that organization has no real oversight and your complaints are silenced. And Google will go to great lengths to preserve its advantages in the online advertising market.

Posted in business, internet, media, publishing, technology, USA | 1 Comment »


How Google can get it wrong: an expert on malware gives advice

09.04.2013

Frustrated with Google’s ongoing false accusations over our websites, I joined the Stop Badware community today (Badware Busters), and got some sensible advice from a Dr Anirban Banerjee of www.stopthehacker.com.
   He had checked what Google was on about, and noted that it was still making the same accusations it did on Saturday—when we know that we had already removed the hack that day.
   I told him this, and he replied:

One policy that a customer followed since Google was just not letting them off the blacklist inspite of cleaning the server, DB, etc.. was to “suspend/remove” all ad code pointing to the mother pipe (your main server in your case) – get the request for reviews pushed in asap, get the sites off the blacklist (since Google did not see any openx ads, nothing to analyze, hence the sites were let off within 5 hours) – then put the ads back again.
   They used a simple grep command to strip out the ad code, and then restored the pages and code from a relatively fresh backup once the blockages were lifted.
   I know this is kind of hack-ish – but sometimes inspite of all the meticulous cleaning that people do – automated system will flag sites.

   In other words, Google can cock up. This time, it did. So you basically need to fool Google, get your site off the blacklist, and put things back to normal afterwards.
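   As a sketch of what that strip-and-restore dance might look like in practice (the ad-server hostname and web-root directory below are stand-ins, and this assumes the injected references sit on their own lines, the way OpenX-style script tags usually do):

      # Strip any line referencing the ad server from each page, keeping
      # a backup alongside so the code can be restored once the blacklist
      # is lifted. "ads.example.com" stands in for the real hostname.
      import shutil
      from pathlib import Path

      AD_HOST = "ads.example.com"

      for page in Path("public_html").rglob("*.html"):  # hypothetical web root
          text = page.read_text(encoding="utf-8", errors="replace")
          if AD_HOST not in text:
              continue
          shutil.copy2(page, page.with_name(page.name + ".bak"))  # back up first
          kept = [line for line in text.splitlines() if AD_HOST not in line]
          page.write_text("\n".join(kept) + "\n", encoding="utf-8")
          print("stripped ad code from", page)

   Once the review comes back clean, renaming the .bak files back into place restores the advertising: the same hack-ish workaround described above.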
   Or: there may be a drunk driver swerving left and right at the wheel of the Google truck, so it’s your job to make sure that you build a nice road in front for them, rather than insist that they clean up their act and stay on the road.
   Mind you, the last time Google claimed to analyse something in two days, it took six months—here’s hoping we’re back online before then. It’s getting embarrassing telling clients what has happened, especially as most drink the Google Kool-Aid and believe the firm can do no wrong. Peel back only one layer, and you can see plenty that goes wrong.
   It’s not fair, but what can you do against the Google juggernaut when so many people rely on it, especially Chrome users who are getting the false red flags more than anyone else?

Posted in business, internet, publishing, technology, USA | 4 Comments »


Facebook says 2007–9 privacy breach is ‘false’—so why are so many affected?

30.09.2012

Facebook received this bug report from me today (the ‘Sincerely,’ etc. at the end have been omitted).

Hi guys:

I know you’ve said that the bug reported in the media about private messages going on to walls between 2007 and 2009 cannot be confirmed, but it has happened.
   Back in those days, with ‘User Name is’, I wrote in the third person. Yet I can find these allegedly public posts in the first person on others’ walls.
   Before you introduced private message threading, people often took excerpts from a previous message in their replies. I can see those, too.
   Your investigations will have shown that these messages cannot be found in users’ PMs. They will also have shown that they were public to begin with. I can confirm that with a full data download in October 2011, I saw exactly the same thing as you.
   This leads me to believe that some of these PMs were incorrectly classified at some stage, leading to their recent publication.
   I even know of a case where a contractual dispute done in DMs was published.
   After Timeline was introduced in September 2011, I spent a lot of time looking at previous years, because I was fascinated about how you did the annual summaries of the most significant posts (and the most significant new friendships). I distinctly remember that the number of messages on our walls increased per annum. Right now, the sequence decreases between 2007 and 2011, beginning with 786 messages in 2007. I know for a fact that that number was not 786 when Timeline was first introduced and I have a photographic memory.
   Please don’t dismiss users and say that we don’t know the difference between DMs and wall posts. Most of us do, and there are many signs that these messages are private—maybe not in the way you have categorized them now, but certainly in the way they once were categorized and in the context and manner of those messages.

I’d urge everyone to check their Facebooks. While I thought the first reports about this were hoaxes (and Snopes continues to report that they are, and the US mainstream media have taken Facebook’s side), I’ve taken a look at my Facebook, and the structure of some of these “public” 2007–9 messages is akin to that of private ones. Better yet, check your own and see if your private messages have been broadcast.

PS.: At one netizen’s suggestion, I looked back through my 2007 notifications and can confirm what Facebook says—at least for messages before August 1, 2007 (the day I turned off wall post notifications). Every notification correlates with a wall post or a wall-to-wall. I’m still convinced the annual summary that year showed far fewer than 786, so my only conclusion there is that Facebook must not have shown a lot of the wall-to-walls.—JY

Posted in internet, media, New Zealand, USA | No Comments »