Ask HN: Ridiculous Coding Practices by companies you have worked for
68 points by dannyr on June 4, 2009 | 134 comments
I once worked for a company that did not allow us to use object-oriented programming. This is because their 2 most senior programmers did not know OOP.

Other things include putting a lot of business logic in stored procedures (SQL) so that migration is easier.

We were also not allowed to use folders; every web page had to live in the root directory. I can't remember the reasoning for this.



Version control done wrong:

Version was code written onto a CD-R

Control was a person who earned a full $60K salary for monitoring a room with a bunch of file folders containing different versions of the software. When you made a commit, you burned a new CD and brought it to Tim on the 5th floor.

You can't make this stuff up.


I thought I had the worst version control story, but you won by a large margin. Nonetheless, here's how version control worked at a place where I was contracting:

They used VSS for version control, and in a team of around 30 people only two folks had check-in access. So how did I check in my changes? When you were ready to check in your files, you filled in a 'blue sheet' (a blue piece of paper with a form printed on it) by hand: your name, the file's location on your computer, its VSS location, and a comment, for each file you wanted to check in. I used to take one day to implement a change request but as much as two weeks just to check in the damn files, as those gatekeepers were always busy. And god forbid someone else checked in a file you intended to check in: the gatekeeper would send you back to reconcile the changes, and you'd be at the end of the queue again. Trust me, I used to get 'blue sheet' nightmares!

When I asked why they had this weird policy, I was told it was to prevent people from checking in breaking changes and to enforce code review. Mind you, while I was there, the gatekeepers didn't review my code even once. Reminds me of the monkeys-and-banana story.


You thought you had the worst version control story? What's version control?! Seriously. All edits were made locally and "tested" by direct upload to the production server. Different "commits" were tracked by renaming files: file.asp.old, file.asp.old.xx, file.asp.5.25.2002

"Hey Tony, are you working on blahblah.asp? Oh, no?, Great, let me just pull a copy, don't work on it until I say I'm done OK?"

Shortly after my arrival (circa 2002), I was fortunate enough to put svn into place. I gotta say, though: once you know how to do something the completely wrong way, it's much easier to do it correctly. Ugh.


I thought for a second you were describing the place I used to work at. Only it was PHP, and the database was a 'free' version of a commercial database which was already several years out of date when I joined...<shudders/>


Believe it or not, this is still better than ClearCase.


I'm stuck with ClearCase at my current job.

Before making each change, my team's 'architect' (almost hard to say without laughing/crying) copies her entire checkout folder to a backup folder she has created... painstakingly numbering each backup by date and time. Her explanation: this way she'll be able to revert anything that goes wrong.

So yeah, not real sure why we bother with ClearCase when we have team members implementing ad hoc, informally-specified, bug-ridden, slow implementations of half of Git.


Until you said 5th floor, I thought I must have worked with you! Our Tim was on the second floor.


My Tim was on the same floor. And his name was Mat.


I've worked at a place with CD-R versioning scheme but without the Tim equivalent.

Although the backups were pretty important because the founder was paranoid and insisted that we put the hard drives and swap space of every machine through 25 runs of shred at the end of each day.


Backups were zip files of the source listed by date. They were kept on the CTO's laptop because he was the only one allowed to see the source, everybody else just got the function they were working on.

The code was written in C++, but the CTO was a Fortran programmer! So no inheritance, no function overloads, no private data. And no malloc/new - there were several global buffers of different sizes, statically allocated at startup, and you used those, managing your own pointer offsets.
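For the curious, malloc-free "Fortran style" in C or C++ usually looks something like this - a minimal sketch, with the buffer names and sizes invented for illustration:

```c
#include <stddef.h>

/* One of several fixed-size global buffers, reserved at startup.
   (Names and sizes here are invented, not from the original shop.) */
static char g_small_buf[4 * 1024];
static size_t g_small_used = 0;

/* "Allocation" is just handing out the next offset into the buffer. */
static void *small_alloc(size_t n)
{
    if (g_small_used + n > sizeof g_small_buf)
        return NULL;                  /* pool exhausted */
    void *p = g_small_buf + g_small_used;
    g_small_used += n;
    return p;
}

/* There is no per-object free: you reset the whole pool, or nothing. */
static void small_reset(void)
{
    g_small_used = 0;
}
```

Every caller tracks its own offsets into these pools by hand, which is exactly where the fun begins.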


I once had a boss that forbade regular expressions, including split().

The task was parsing datasets in a variety of formats (one per vendor, hundreds of vendors.)

The only acceptable way was substr. Everything else caused "memory fragmentation," apparently.

The language was Perl.


The only thing better than ridiculous coding standards are the fanciful explanations for them.


Yeah, Perl shops can be weird. I was told not to use map or grep because they are "too hard to understand". How?!?


Sadly, it was not a Perl shop. I was one of only a few developers.

My next job was at Morgan Stanley, which had great infrastructure, both for Perl and otherwise.


I've been discouraged from putting code in reusable modules because it would mean one central location for everything to break. The rationale was that if every routine was copied and pasted where it was needed then introducing an error in one place would not affect other places.

I am leaving a company that uses a certain lightweight language to build web apps using CGI. There is no way to load libraries except by recompiling the runtime, and we do not have the C source code for that runtime. So every library out there is NIH'ed from the start - even libraries for the language we use, since we're on a 9-year-old version of it and can't load binary libraries. Interfacing with external tools is broken, because the function that runs system commands has been changed to be async, with no access to the standard streams of the process it creates.

The product that is built on top of this is designed to let people build applications using flowcharts with a visual editor... but the "programming" that is the result of this system is essentially spaghetti code with GOTO, numbered labels, and a single global scope. Nodes in these flowcharts are labelled with 100, 110, 200, etc. by default and are laid out automatically, so there's no way to understand a routine at a glance. Variables in a flowchart get clobbered by other "subroutines" when they are called. Everything is indirections on indirections on indirections. I could go on... but you can get the idea by looking at this: http://imgur.com/ZKCU.png

This is all seen as advanced next-generation modeling that provides massive productivity gains over standard programming practices.

At one job I introduced SVN and Tortoise to try to bring version control to the project. A few weeks later we were using Visual Source Safe because the contractor goons couldn't grasp anything but VSS's Visual Studio integration. They also moved to doing development on one shared drive so that updating the whole tree could clobber someone's changes that weren't checked in.

I was also instructed to not use document.getElementById(), since the IE de-facto "standard" was to just refer to the element by its name (or whatever was going on back then).


> I've been discouraged from putting code in reusable modules because it would mean one central location for everything to break. The rationale was that if every routine was copied and pasted where it was needed then introducing an error in one place would not affect other places.

Ha, I've run into this one as well. Blew my mind, they actually thought of this as smarter because it avoids any single point of failure in the code, like a battleship where there's a spare everything. Some people are just too stupid to be coding.


I actually once worked for a large insurance software company. They had a desktop workstation application that was entirely written in C. It consisted of about 2000 DLLs, which were used to get a kind of primitive polymorphism. Most of the money came from "customizing" the app for clients, and most of the labor for that came from local community colleges, where an intro to C programming class had students implement a doubly-linked list. As a result, these "local talent" programmers would only use doubly-linked lists for collections, and would re-implement one every time they needed a new collection - so every customized app had about 500 re-implementations of a doubly-linked list! (There were more collections than that in the entire app, but the core was written by competent programmers.) And yes, the same mistakes were made each time, and it was re-debugged every time.

I was there on a project where the client's IT department had implemented a universal library for doubly-linked lists. None of my fellow programmers understood how it worked (clever use of macros to get pointer offsets) and management would only use it for that particular client.
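That "clever use of macros to get pointer offsets" was presumably the standard intrusive-list trick: embed the link node inside the user's struct, and use offsetof() to recover the containing object. A sketch of the general technique (not the client's actual library):

```c
#include <stddef.h>

/* Link node embedded inside any user struct. */
struct list_node {
    struct list_node *prev, *next;
};

/* Recover a pointer to the containing struct from a pointer to its
   embedded list_node -- the macro/pointer-offset part. */
#define CONTAINER_OF(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static void list_init(struct list_node *head)
{
    head->prev = head->next = head;   /* circular, empty list */
}

static void list_insert_after(struct list_node *pos, struct list_node *n)
{
    n->prev = pos;
    n->next = pos->next;
    pos->next->prev = n;
    pos->next = n;
}

/* Any struct becomes "listable" by embedding a node (example type). */
struct customer {
    int id;
    struct list_node link;
};
```

One library then handles every list in the program, with no per-type re-implementation - which is exactly what made it look like magic to programmers who only knew the community-college version.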

(They charged hourly for the "customizations.")


I interned for a health care company where the lady in charge of my work had a Microsoft Access database with a single table. That table had something like 500 columns. I spent a week or so understanding it and creating a decent, normalized relational database model. I was not allowed to make any changes. The reason given was 'You will leave, who would understand and maintain all these tables'


The reason given was 'You will leave, who would understand and maintain all these tables'

Well, that's a good point. Fixing the database is great, but someone has to understand the system after the intern leaves. At that point you need to figure out who will maintain it after you leave, and get them to be the champion of the new system.


I'm apparently too opinionated to survive long in insane environments, but I did intern for a company that, in Java, did a couple of really disturbing things:

1. Every Java exception that can possibly be thrown from a method must be caught, logged, appended to a new Java exception, and re-thrown. Fully 2/3rds of the code base was this sort of make-work.

2. They followed the traditional naïve model of distributing a system: break it into functional units, and put those on different machines. Win! Well, except that now every single query needs to talk to six machines, all of which need to be up.

3. Their system, running RMI, was of course implicitly multithreaded, because that's how RMI works. I didn't see in the codebase any real attention paid to this, so I asked about it and the project lead said "the two guys who know threading aren't in at the moment, but we'll schedule a meeting Tuesday when they make it in" on a team of thirty. Needless to say, the resulting system was a complete mess...


I haven't programmed Java besides some tiny things so I'm curious about #1. Suppose someone wanted to log all exceptions. How would you go about it in Java?


Well, you certainly don't need to log and rethrow them at every call site, because Java exceptions contain a stack trace. Sometimes rethrowing is necessary, because of Java's checked exceptions. Sometimes exceptions are benign. The best rule I've seen is as follows: never catch an exception unless you can handle it (whether by dying, wrapping it to work around Java lameness, etc.).


Logging all unhandled exceptions can be an additional measure for ensuring quality. You have coded defensively, you have done careful testing, but it still crashes. It happens.

I do agree with your point that logging exceptions that might be handled further up the call stack is silly, and probably makes the whole log useless.


Your point 2 is far from ridiculous. We do it that way so the business logic exists and is enforced in one place rather than being scattered around this desktop app and that website and so on. Only the most trivial applications can afford to put all their business logic in the middle tier.

Also I'm not allowed to use OCaml in production systems because too few people at the company speak it. Annoying for me, but again far from ridiculous.


Point 2 is a bad idea. SQL stored procedures are not a good way to implement business logic. Of course, people who hold the opinion that OO is a bad idea would think it's a great way -- it's pretty much the opposite of OO.

We do it that way so the business logic exists and is enforced in one place...Only the most trivial applications can afford to put all their business logic in the middle tier.

There are much better ways of enforcing business logic and controlling it all in one place, which would be compatible with configuration management/source control and much more convenient for most programming teams.

(Namely: just publish the business logic as a library and have standards that dictate its use. These can be enforced using automated scripts and a little process. You are also not restricted to the middle tier, though I'd still say it's a good idea.)
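A sketch of what "publish the business logic as a library" might look like in C: one translation unit holds the rule, and the website, the desktop app, and the batch jobs all link against it. (The function name and the 30% cap are invented for illustration.)

```c
/* bizrules.c -- compiled into one shared library, linked by every tier;
   version-controlled like any other source file. */

/* The single authoritative discount rule. (The 30% cap is an invented
   example, not a rule from this thread.) */
int discount_is_valid(double pct)
{
    return pct >= 0.0 && pct <= 30.0;
}
```

Change the cap in one place and every consumer picks it up at the next build, which is the "enforced in one place" property without putting the logic in the database.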


just publish the business logic as a library and have standards that dictate its use

LOL!! Why not put the business processes in the standards too, then you won't need any code!

The point is, no-one running any apps against your database has a choice but to respect the business processes if it's all implemented in stored procs. You are protected from malice, from incompetence, from simple human error, and your auditors and regulators and shareholders are happy. Maybe no-one cares if it's only a website.

Also I am hilariously amused that you don't think stored procedures can live in version control like any other source code.


Stored procedures don't seem like a bad approach to me. You can provide versioning support by doing something much like Rails migrations. Providing access to the database through stored procedures basically is publishing business logic as a library. I'll admit pl/pgsql looks a little funky, but it doesn't seem too bad.

I'll note I'm not talking from experience here, I'm just speculating and hoping for somebody with more practical experience in the matter (or better insight) to enlighten me.


If it is "basically...publishing business logic as a library" then why incur the expense of using such a scheme? Just publish the darn library and put it on a read-only shared directory. Why give up the use of purpose built version control and programming tools? Why give up the modularity and increased productivity that a high level language can give you?

Most of the time, it's because the organization thinks in terms of SQL and relational tables. There are better ways to do business logic.


As a language, it's pretty painful to use and you run into those limitations pretty quickly. You also put more processing in the database, which isn't great for scalability. It is nice being able to test changes via the sql prompt though. A library + console app might be a good compromise.


Your point 2 is far from ridiculous. We do it that way so the business logic exists and is enforced in one place rather than being scattered around this desktop app and that website and so on. Only the most trivial applications can afford to put all their business logic in the middle tier.

But of course, the middle layer that applications talk to need not be the database. A simple RPC server between the apps and the database makes maintaining the system significantly easier. Your procedures are "normal code" that you already have a toolchain in place for dealing with. (Testing, libraries, version control, etc.)

The environment is richer, which means the middle tier can handle caching, sharding, replication, auditing, and anything else that you might need to add. And, you can add these things incrementally, without touching the apps that speak to this server.

IMHO, stored procedures are a quick hack that are barely viable for even the simplest tasks. Many people seem to disagree, but I doubt they have ever considered any other way.

(A notable example of this system is Flickr. Their internal apps, like the public web interface, use the Flickr API to do their work. In this case, the web app that powers the API is the "middle layer"/"RPC server". I think this worked out pretty well for them, and it made it easy to expose their database to third parties.)


this was common practice about 15 years ago, before people discovered that putting biz logic in stored procs really sucked for UI interaction, sucked for maintainability (stored proc languages really suck), and truly sucked for scalability.


It sucks, yes, but when you work at a place where groups of people commonly access the database to do operations that cannot be done from the front end, you definitely want procedures. They use different tools for a variety of tasks, and when the business logic isn't in procedures it's very easy for them to break things - and they have, and do. Our DBA, however, wants procedures for all access, which I find to be the other extreme, for the reasons you mention: not all data needs this level of control, as some of it is just simple CRUD operations on tables they should never change anyway.


Well then in this case you have a people problem, not a software problem.


The alternatives are not

1) Sproc 2) Scattered around the app haphazardly

The only apps I've seen use the BLL-as-an-sproc trick were written by people without any experience writing scalable code at a real company.


Really? Most of my experience is on trading systems handling thousands of real-money transactions per second, and stored procedures are the only proven approach.


SAP has all its business logic in the middle tier, and I wouldn't call it trivial.


My dad can beat up your dad.


Promote code that's indented 3 spaces instead of 2: rejected by QA.

Promote code that deletes the entire production Customer Database if the user hits "Esc" on Form 11 by accident: no problem. Unless it's indented 3 spaces instead of 2. Then it's rejected.


>> Promote code that's indented 3 spaces instead of 2: rejected by QA.

I understand that QA rejected it if the coding style stated that 2 spaces should be used for indentation.


> I understand that QA rejected it if the coding style stated that 2 spaces should be used for indentation.

I reject your comment because you used two >> to denote a "quote" instead of a single >.


The difference is that Hacker News does not have Commenting Standards stating the number of > to use for quoting, unlike his company, which probably had Coding Standards stating the number of spaces to use for indentation. Working with code written by 3 people where one of them used 4 spaces, another used 2, and the third used tabs is just terrible.


A company I was at created huge arrays of data structures, many of which contained nested arrays of data structures, which were eventually serialized and rpc'd to another server. Since this was written in C, and they felt malloc was too hard to do correctly, every structure was stack allocated, using a MAX_SIZE number of elements per array, all memset'd to zero.

It didn't take long until everything started crashing from buffer overflows, so they just increased each MAX_SIZE definition to some ridiculously huge value. This made some of the top-level data structures hundreds of megabytes. Developers complained that memset would take multiple seconds to run, and so they suggested just removing the memset calls to make the code "more efficient". Another developer suggested replacing all calls to "abs" with "if (n < 0) { n = -n }" because it would also be faster. I quit soon after that.
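The blowup compounds because nested worst-case arrays multiply. A toy reconstruction of the pattern (all names and sizes invented):

```c
#include <stddef.h>

#define MAX_ITEMS  4096   /* the "ridiculously huge value" */
#define MAX_ORDERS 4096

struct item  { int id; double price; };                    /* 16 bytes */
struct order { size_t n; struct item  items[MAX_ITEMS];  };
struct batch { size_t n; struct order orders[MAX_ORDERS]; };

/* sizeof(struct batch) exceeds 256 MiB even when it holds a single
   item, and memset()ing it to zero touches every one of those bytes. */
```

Each level reserves the worst case for every element of the level above it, so the top-level struct grows as the product of all the MAX_SIZEs, not their sum.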


We worked on a web application. The web-code (asp) and business layer (vb) were under source control. We used visual source-safe, which was ridiculous enough (though not my main gripe).

My main gripe is that while those two parts of the project were under source control, the database definition was not. So while we could revert to a previous version of the code, there was no way to revert to the database schema that the code relied on. We were able to autogenerate database creation scripts (which we used for deployments), but management would never let us check those scripts into source control (which would have solved a lot of problems). The manager was a guy who liked to let wizards auto-generate code for him, and the idea of maintaining a sql script several thousand lines long was just too much for his brain to handle.


I think the kind of place where managers dictate what developers can check in is a place you should run away from.


I did!


Our company spent US$50K on an Oracle database and my boss forbade me from using triggers, because he wanted the application to be database independent. Never mind that we used Oracle syntax in our queries.


Also: not allowed to use Oracle's native date data type, and no foreign key constraints allowed!


How often does one actually switch between databases? I can see that, yeah, some projects have a very slight chance of migrating from mysql or postgres to oracle if they are really successful. But for many projects, the time and pain spent on using a database abstraction layer - or, even worse, building one - is a total waste of time. Not that I mind using a well-written ORM like ActiveRecord, but the benefits of that don't come mostly from it being database agnostic.


When you have many clients, each with a different database. All other times the argument is moot.


Extensive use of Hungarian Notation ("i" before integers, "sz" before C-strings, and the kicker: "ob" before objects).

No use of STL, using a hand-rolled dynamic array instead that grew by a constant size instead of doubling.

Extensive amounts of application logic written right in the GUI builder's generated event handlers, for the company's very large flagship product.


That's "systems" Hungarian, developed by the one and only Microsoft. Putting the variable type inside the name is redundant, I think everyone can agree.

"Apps" Hungarian, on the other hand, is a wonderful system where the prefix describes the purpose of the variable (such as us for "usafe string" and ss for "safe string"). It also describes function naming convention (ss_from_us for "Safe string from unsafe string") so that you can easily tell if you're making a coding mistake. Compare:

  $ss_username = ss_from_us($us_username_field);

  $ss_username = $us_username_field;
Real easy to spot the mistakes.


also developed by the one and only microsoft.


Hungarian notation makes sense if (a) you have to declare all your variables at the top of the function, so you can't see the variable types, and (b) you don't have a code browser that gives you type information when you hover over a symbol. More and more, neither of those are true, so I find Hungarian fairly useless.


We used systems Hungarian notation for LotusScript because older versions of IBM Lotus Domino Designer had limitation (b) along with some other problems. It was actually helpful in understanding and maintaining complex code. But we always understood that we were doing it to work around editor flaws rather than because systems Hungarian notation was a good idea in general.


Even in C89, you can declare variables at the start of a block, not just at the top of a function.


http://www.joelonsoftware.com/articles/Wrong.html

Great article on Hungarian notation, why it used to be good, and how things went horribly awry. Takes a while to get to the point, but he's convincing.


so you know what (who ?) "paul" is : pointer-to-a-unsigned-long.

sorry, couldn't resist


I'm sorry, I have to extend this to hardware/environment issues. Four tech people in the company (including a web designer). One guy managing the network remotely from some other office location. The only off-site access to the production servers was through VPN - common enough, right? Fine, but instead of figuring out how VPN passthrough/NAT/ports worked on the Cisco sitting in the closet, the sysadmin settled on getting everyone who needed access to the server their own static, external IP address. Not only were they paying extra for these on a monthly basis, but we had these poor early XP boxes sitting on direct connections to the internets. Firewalls are for jerks, eh? Oh, and to make matters even funnier, when you were connected to the VPN you couldn't get email or go on the internet, because no one had ever figured out that the "use default gateway" option maybe meant something.


We were explicitly barred from using tools that were not published by Microsoft. Even if Microsoft didn't make a tool to solve the problem. Instead, we were encouraged to create our own, on our own time, outside of regular business hours.

The same place barred the use of code generators, the use of snippet macros, and they barred admins from creating automated scripts/utilities to automate gruntwork. They likened it to taking a math test without a calculator -- you should be able to do it by hand if you can supposedly do the job.

The same place also required that comments were not to be made in source code files. They should only be made in source control check-in comments.

The same place felt refactoring was an unproductive activity unless it corrected a specific "bug". Scalability and performance issues weren't considered valid bugs - it was fast before, so why would it ever be slow now? Blame would be put on the hardware until network ops bit back. Then you would be lambasted for your application not scaling 4 orders of magnitude on the same hardware when that wasn't part of the original requirements.

One place I worked for did not allow developers to use the bug-tracking system. Instead, someone from QA would place printout of open bugs in a bin. You were required to take the bug from the bin, back to your desk, and when completed, fill in a form by hand detailing the explanation and attach the appropriate colored paper strips in the right order so it could be triaged appropriately.

The same place migrated customer data to their product by hand. Using Excel. We jokingly referred to this process as "electronic handcrafting". The stated reason was that it was faster and resulted in fewer reported defects after deployment. Someone in that department got fed up, so she had me create ETL processes for the four most common competitor products. Time-to-install went from 4 weeks to 4 days and the error rate dropped 98%. It won over the COO and some staff, but made me enemies of plenty of others.

At the same place a sales manager convinced the COO that RAID1 should be considered a viable backup and disaster recovery solution. We were tasked with making it so RAID controllers would not back up certain types of files.

Another place I worked for as a contractor insisted that their web app be written in ColdFusion, despite having no other contractors past or present and a market with no ColdFusion developers. The reason ended up being that it "sounded cool". Needless to say, they have an app still running on ColdFusion 4.

I've worked in too many places that are adamant that "backups" are all the database version control they need.

I've worked at a place that expected me to develop APIs for certain products and systems we had, yet refused to allow a requirements-gathering phase. I was actually fired for trying to explain that I can't build something when I don't know what I'm building. The real reason was so that the new CIO could bring in a contracting firm run by a friend of his. Three other people have since quit in response.


How did you find so many bad jobs (they sound horrendous)? More importantly, why did you take them, let alone keep them?


Inexperience, and a tendency to be a "saver". I've since learned how to read between the lines, ask a lot of questions hiring managers aren't expecting to be asked, and request interviews with team members if they aren't offered (refusing the position if my request is refused).

Additionally, you'll hear enough horror stories if you talk to people who work in IT departments at regular companies that aren't doing anything interesting, because no one's been able to present a solid business case for how IT can add value or affect the bottom line. Sometimes the works are simply gummed up by political battles, and everyone's afraid to go to HR or to try to knock down those walls for fear of losing their job. What you read about on HN is the right end of the bell curve, and often isn't representative of your average company.

Most of these issues were driven by politics. In a lot of situations (electronic hand-crafting, manual bug-tracking) I was able to drive change from within. Granted, those took months. However, people were afraid of opposing the head of QA because A) she was a lifelong friend of the founder's life B) she was predisposed to think all developers were idiots C) they weren't equipped to traverse that delicate political minefield without at least losing a leg in the process.

The scalability/performance issue one? This was at a private sales-oriented company with a billion dollars in annual revenue. Again, mired in politics. Their entire e-commerce system was built by guys who were completely green at the time, and they hacked it together. Their lead was best buds with the CIO, and was... a pretty passionate guy, and very defensive. The average person with a spouse and kids to support didn't feel it was worth risking their job to improve the architecture if they couldn't present a decent argument without it coming across as hurting his feelings. Me? I spent far too many lunches talking with developers, guys in network ops, and analysts to come up with ways to cache data and partition functionality so the thing could handle the existing load and scale in the future. It took buy-in from stubborn developers who normally pointed fingers at the database or the servers, network ops guys who pointed fingers at the developers, and a testy data architect who had a serious row with the development manager.

I've learned a lot from those experiences, especially the social problems that can exist in team-based development, and how to present solutions where even the most stubborn-minded individual can "get it". I've also learned that these problems exist nearly everywhere, just more so in some environments than others, and when you can solve these problems and when it's instead best to learn and move on.


BTW, your profile lacks contact information despite you asking to be contacted. :-)


This should be corrected (I thought I updated it a while back; the info is certainly there). If not, send a tweet to @robgomes.


It seems like the general solution to these problems is "teach the boss that their friend may just suck."


> "However people were afraid of opposing the head of QA because A) she was a lifelong friend of the founder's life"

You had a biographer leading QA?

Sorry, couldn't resist.


life = wife


Man, that makes it even harder to effect change, because she literally has the guy by the balls.


I was barred from using recursion because it would confuse people. My response was we should fire these people. Authority won.


Barring people from using recursion is certainly bad, but firing people who don't understand your recursive code is probably worse.


We're limited in what we can do/use by the technical depth/breadth of the 'architects' (mid-level devs at best) that we have. It pretty much limits us to Java 1.4 and earlier, and libs that have been open source for at least a decade.

I'd written a script in Ruby to help us automate some Maven grunt work, and they said they'd rather spend 6 hours doing it manually than run Ruby, as they hadn't vetted it.


Quietly do your work in Groovy then...


that's kind of what I do: since I work at about 25% capacity most of the time I'm learning the inner guts and glory of ruby for projects outside of work.


Our entire system was written in ColdFusion. I was asked to add a feature. That feature was to be embedded in the ColdFusion system within the nested includes and infinite scope of tag mush.

It had to be written in PHP. Why? Because we were moving to PHP.

So I wrote it in ColdFusion and used PHP REST webservice to get the data.

I still to this day do not understand why that system is now part ColdFusion, PHP, Flex, and Perl. :)

Great company though, excellent people to work with.


It's called legacy. It happens when companies have been working on a code base for more than a couple of years and the current trends change.


I was once banned from using ternary operators, that was weird


have you ever seen them abused ? you'd understand why : (or at least have you thought about it ? you'd understand why : (or at least you can get the gist from this ? good : I can't help you))


That was perfectly clear to me... :D


I think what he meant was more something like this:

  $row[6] = $src?str_replace($src,($table[$src]?$table[$src]:$src),$row[6]):$row[6];
[Edit] It's not quite so clear when it's not in English, methinks.


Same here :)


Actually, this is even better than a normal post since I can terminate early without reading the rest ;-)


I guess it all depends:

  seen them in tabular format ? you know they're great :
  have you seen them abused   ? you think you know     :
  have you thought about it   ? you think you know     :
  get the gist from this      ? good                   :
                                I can't help you       ;


That's quite a bit more readable than the equivalent tower of ifs and elses.


Would this be like (cond) in lisp?

  (cond
    ((seen-them in-tabular-format)  '(you know they are great))
    ...)


You can use the ternary operator without abusing it.

That said, that made complete sense.


have you ever seen them abused ? you'd understand why

: (or at least have you thought about it ? you'd understand why

: (or at least you can get the gist from this ? good

: I can't help you))

Sorry, not perfect, but with some indentation it would do the job quite nicely.

Sometimes it's not what you write, it's what you don't ;)


also, aren't the parens redundant?


My post below:


    <?="I","smarty pants"?"don'tget":"see";?>it.


In a similar vein:

Have you read the Drupal source ? Exactly : Take a look


I was working for an airline website - which was weird, because I had been promised I would be working on something else. I bit my tongue and agreed to it, since it sounded like fun.

Then the Chairman of the company, a fellow who supposedly had a law & math degree, told me that he would be doing the design with me (???). I sat down as he started writing buzzwords on the whiteboard. I jotted down a bunch of nonsensical words like "powersearch", "hypersearch", "megametasearch", and tried to write down what each meant.

Once the buzzwords were out of the way, it was onto the real meat and bones - the design!... except, in his own words, "At XYZ, we design the database FIRST and then design the program on top of that". Think about it. No objects, no user story, no nothing - just "DESIGN THE DATABASE". So I sat slackjawed as he started listing stuff like "Oh, we can use an enum for the airline companies... hmm, I wonder if int is too big for the id, maybe we should use smallint?". I tried to steer him towards the more fundamental questions of "What exactly do we want?", but he was having none of it.

I spent 4 months working on that software, and 2 more months wasted in other ridiculous projects he had before I tendered my resignation.


It seems to be a common problem that, although the will is there to improve practices, persuading higher-ups is almost impossible.

Although many of us like to think that cold, hard logic is the only suitable method of reasoning with somebody, the reality is that even programmers are somewhat human. We're all proud, emotional souls.

If, like me, you've spent half a lifetime on your own, hacking and tinkering with computers, your skills in handling people may be somewhat underdeveloped. I've recognised this in myself and have been trying to do something about it. I can thoroughly recommend Dale Carnegie's classic book:

http://en.wikipedia.org/wiki/How_to_Win_Friends_and_Influenc...

In most cases, it should be possible to win over the stubborn manager or sensitive programmer who's blocking any progress towards sane development practices. There's plenty of material in the Carnegie book suggesting how one can gently persuade people, i.e. how to win arguments. In hindsight, a lot of the book's advice seems obvious and corresponds with my own experiences, but I needed somebody to make it all explicit.

I have first hand experience of having to placate a sensitive, confidence-lacking programmer. A few times it's been necessary to reject his ideas or code, and he's taken it very personally, being unable to separate in his mind an 'attack' on his work from an attack on him. Once you know how, it can be fairly easy to reassure him.

I've never had the problem of a stubborn manager who thinks he knows better, but I have a good idea of how I'd approach him and try and persuade him of the right way to do things. Then again, I still feel like my people skills are great only in theory, and somewhat lacking in practice.


I agree, but ideally your goal should be to avoid these environments in the first place.


I had to set lots of `visible?', `touch sensitive?', etc., flags based on Boolean expressions. I ended up with things like

    foo->vis = a || (b || !c && (...)) && z;
Coding standards, and the arbiters at code review, insisted on a nested set of if-then-else statements comparing, e.g. a == TRUE, rather than just `a', with the many blocks being either foo->vis = TRUE or FALSE as appropriate.

This resulted in my two or three line statements turning into many more lines, which then gave them cause for concern because their style had swelled functions to where they thought they were getting too large!

Plus, the C compiler, one of IBM's, produced better POWER code for my line than theirs.

I finally got them to back down when I gave them a code sample in their style for study. Even when I pointed out it had an error they couldn't find it. I'd deliberately missed an else-block off an if-statement that had started a dozen lines before. This would mean foo->vis sometimes wasn't written to at all; impossible with my version.


We cannot download a single piece of software that is not approved by the 'Open Source Council' - a group of reactive gatekeepers who, if they ever did their job, would already have a tool ready for every situation; instead, they offer nothing but bemused confusion in place of the advice they are purported to give us. I work for a crappy insurance company. My coding life 9-5 sucks.


I remember a couple jobs ago, back in 2004 we were stuck using JDK 1.1.8 because the IDE we used didn't support anything better, like 1.4.2. The IDE was MS Visual J++ so you can imagine my pain.


Circa 2002, if you could only use JDK 1.1.8, I think MS Visual J++ was one of the best Java IDEs.


I've used source safe, which is pretty much the most brain damaged way to do source control ever invented. "Could you unlock that file please?"... Thank god for svn and git.


Does a ridiculous lack of any coding standards whatsoever count?


Isn't that more the rule than the exception?


I was paid over six figures to share a computer with another programmer all day long, even though studies have shown that it halves productivity.


i sense a troll


The last two points seem to be pointed towards architecture decisions, not necessarily coding practices. I see coding practices more as language selection, indentation styles, the forced use of asserts or null checks, etc. ... Things that say how you write, manage or release code.

As for the very last point, maybe someone had some data to show that sub-domains increased response time. Alternatively, maybe subdirectories inadvertently introduced less code re-use at the company. Who knows. In business, sometimes there are stupid rules simply because someone thought it was cool. You either abide by it or bring forth some data to show why the rule is wrong (or could be better).


Rumor has it there's a coding standards manual floating around at IBM that forbids the use of pointers. In C++. Yes indeed.


No problem

  template <typename T>
  class NotAPointer {
    T* m_pointer;
    T* set(T* p) { return m_pointer = p; }
    T* get() { return m_pointer; }
    NotAPointer(T* p) { set(p); }
  };
Just outsource that and you're pointer free!


I see how you've cleverly perpetuated the policy by forgetting to make the methods public.


As it was the first C++ I've written in nearly a decade, I'm surprised I remembered as much as I did. I should have also made the set parameter a reference to better obscure the involvement of pointers.


maybe they always use scoped_ptr / linked_ptr?


About ten years ago, a telecom company (which no longer exists since it was purchased by a larger company) I worked for refused to allow us to use anything open source.


Sometimes that makes a lot of sense. The GPL's viral nature can cause serious problems for a business. Everyone at Microsoft is banned from even looking at GPL'd code.


Yes, and Microsoft's attitude to Open Source and Free Software is well known for its rationality and basis in fact.


I'm not talking about Microsoft's general approach to conducting business over the last 30 years.

Microsoft (and many other large software companies) prohibits their programmers from viewing GPL'd source code. This policy is based on the shark-infested waters of our litigious society.

This is Microsoft's open source division: http://port25.technet.com/

They have to be kept away from the proprietary intellectual property. It's like separating your dairy and your meat when keeping Kosher. Microsoft has to separate tainted developers from untainted ones.


well it doesn't make sense when we don't modify the GPL projects and when we don't distribute our code


Do you distribute the binary?


nope - it's just a server side app for internal use


Where I once worked, the -> operator was forbidden to appear in the C code we wrote. In place of w->x or f->title you had to write stuff like WIN(w,x) or FRM(f,title). WIN, FRM and the like were just C preprocessor macros. This was all to allow for some kind of weird future structure compatibility manageable by tricky macro redefinitions but I was never quite convinced.


I worked at a place that for the most part banned vowels from database column names.

I hope you feel better knowing that you probably bank there ;)


Our HTML dev team builds the HTML/CSS so that we can complete our UI specs, then when we hand off the final HTML/CSS and UI spec to our big money development team, they rebuild it from PSDs. They use the same CSS structure framework, so there's no good reason, but we basically pay for the same work twice. If we send code, they throw it out.


Don't take this the wrong way, but how would you rate your team's front-end code? And compared to what the other guys turn out?

I've rebuilt many a template when the code given to me is in bad shape.


At an internship I had back in university, I worked for a tiny startup where the CEO imposed the following restrictions:

* No STL ("too slow", and other reasons) -- had to roll our own giant, buggy core library

* No member variables ("compilation too slow") -- had to put everything in a predeclared struct, have a pointer to it, new+delete every ctor/dtor

* Wasn't allowed to use "configure" -- instead, we'd be forced to run ./configure, and commit the results into CVS; our 64-bit build didn't work so well

* Everything must be written in C++ -- even stuff that's usually written using simple shell scripts ("makes everything easier to understand")

* On Windows, we had to use Visual Studio 5.0 -- couldn't use any parts of C++ not supported by that compiler

* Everything was written in-house -- including 4 networking libraries. Three of them by the same person.

The list goes on. *Hits the bottle


Excel spreadsheet for version control


80 characters per line limit... ridiculous when we live in the age of cheap widescreen monitors.


I can understand that having a strict hard limit is pretty stupid - having to split a printf format string just makes grepping harder. But really, long lines do make code harder to read. 80 is the default width of basically all text editors, it just makes sense that if you are going to limit it, it should be 80.

Would you really ok a code review where every line (hell even one or two lines) took up 200+ chars?


Do you really only look at one file at once? My laptop's display is ~1600 pixels wide; that's enough for 3 terminals, all viewing different files, if I keep them at 80 columns wide. If my terminals are wider, I can only see two files at once. When I'm using multiple displays, I'll generally have consoles in one display and a browser in the other, or six consoles (my brain is small, so I need lots of files open to keep track of how things work). 80 columns is just a nice width for reading things; it's about what a book uses, and it's what most people are used to. Why not stick with it?


It's a religious debate but it's always nice to have more horizontal real estate, for file browsers, help, scripts, another source listing, or whatever else.


In a Java environment, in late 2007, we began work on a new application. I was told not to use Spring and that the application should be developed in Struts 1.

Thank god I left after a year.

Going forward, when you interview, you should ask about development environment, version control, testing practices, etc. You should ask to talk with some of your future coworkers.

If an environment is toxic, riddled with politics, poor engineers and bad coding practices, it's admirable if you try to change things - but you probably won't succeed. You're better off leaving and finding a better place to work.



All business logic placed within stored procedures that others can modify.

All main application development by the technical director and no-one else.


I was told not to use constants on the left hand side of an expression e.g., if("hello"==strVar) and was made to change it throughout my code. Needless to say I quit after making that change.


The alternative 'if(str=="hello")' is so much more common as to be a de facto standard, and if a shop decides to enshrine this common convention in their coding standards and enforce it, I don't see that as unreasonable. Certainly not something you should quit over.


Especially since modern compilers will warn about the problem you are trying to prevent by putting the constant on the left side.


It makes sense if you are using C. If you are using something like C#, the compiler will throw an error (not a warning) if you write

if(str = "hello")

See: http://stackoverflow.com/questions/655657/0-variable-or-null...


your C compiler will also throw an error on that if you're using -Wall -Werror. Which you are, right?


Heavy handed, sure... maybe even micromanaging, but not an unreasonable idea.


perl



