The Daily WTF
Divine Comedy
"Code should be clear and explain what it does, comments should explain why it does that." This aphorism is a decent enough guideline, though like any guidance short enough to fit on a bumper sticker, it can easily be overapplied or misapplied.
Today, we're going to look at a comment Salagir wrote. This comment does explain what the code does, can't hope to explain why, and instead serves as a cautionary tale. We're going to take the comment in sections, because it's that long.
This is about a stored procedure in MariaDB. Think of Salagir as our Virgil, a guide showing us around the circles of hell. The first circle? A warning that the dead code will remain in the code base:
/**************************
Dead code, but don't delete!
What follows if the history of a terrible, terrible code. I keep it for future generations.
Read it in a cold evening in front of the fireplace.

My default stance is "just delete bad, dead code". But it does mean we get this story out of it, so for now I'll allow it.
**** XXX **** This is the story of the stored procedure for getext_fields. **** XXX ****
Gets the english and asked language for the field, returns what it finds: it's the translation you want.
Called like this: " SELECT getext('$table.$field', $key, '$lang') as $label "
The function is only *in the database you work on right now*.

Okay, this seems like a pretty simple function. But why does this say "the function is only in the database you work on right now"? That's concerning.
***** About syntax!!
The code below can NOT be used by copy and paste in SQL admin (like phpmyadmin), due to the multiple-query that needs DELIMITER set.
The code that works in phpmyadmin is this:
DELIMITER $$
DROP FUNCTION IF EXISTS getext$$
CREATE FUNCTION (...same...)
LIMIT 1;
RETURN `txt_out`;
END$$
However, DELIMITER breaks the code when executed from PHP.

Am I drowning in the river Styx? Why would I be copy/pasting SQL code into PhpMyAdmin from my PHP code? Is… is this a thing people were doing? Or was it going the opposite way, and people were writing delimited statements and hoping to execute them as a single query? I'm not surprised that didn't work.
***** About configuration!!!
IMPORTANT: If you have two MySQL servers bind in Replication mode in order to be able to execute this code, you (or your admin) should set:
SET GLOBAL log_bin_trust_function_creators = 1;
Without that, adding of this function will fail (without any error).

I don't know the depths of MariaDB, so I can't comment on if this is a WTF. What leaps out to me though, is that this likely needs to be in a higher-level form of documentation, since this is a high-level configuration flag. Having it live here is a bit buried. But, this is dead code, so it's fine, I suppose.
***** About indexes!!!!
The primary key was not used as index in the first version of this function. No key was used.
Because the code you see here is modified for it's execution.
And `field`=my_field becomes `field`= NAME_CONST('my_field',_ascii'[value]' COLLATE 'ascii_bin')
And if the type of my_field in the function parameter wasn't the exact same as the type of `text`, no index is used!
At first, I didn't specify the charset, and it became `field`= NAME_CONST('my_field',_utf8'[value]' COLLATE 'utf8_unicode_ci')
Because utf8 is my default, and no index was used, the table `getext_fields` was read entirely each time!
Be careful of your types and charsets... Also...

"Because the code you see here is modified for its execution." What? NAME_CONST is meant to create synthetic columns not pulled from tables, e.g. SELECT NAME_CONST("foo", "bar") would create a result set with one column ("foo"), with one row ("bar"). I guess this is fine as part of a join- but the idea that the code written in the function gets modified before execution is a skin-peelingly bad idea. And if the query is rewritten before being sent to the database, I bet that makes debugging hard.
***** About trying to debug!!!!!
To see what the query becomes, there is *no simple way*. I literally looped on a SHOW PROCESSLIST to see it!
Bonus: if you created the function with mysql user "root" and use it with user "SomeName", it works. But if you do the show processlist with "SomeName", you won't see it!!

Ah, yes, of course. I love running queries against the database without knowing what they are, and having to use diagnostic tools in the database to hope to understand what I'm doing.
***** The final straw!!!!!!
When we migrated to MariaDB, when calling this a lot, we had sometimes the procedure call stucked, and UNKILLABLE even on reboot.
To fix it, we had to ENTIRELY DESTROY THE DATABASE AND CREATE IT BACK FROM THE SLAVE. Several times in the same month!!!

This is the 9th circle of hell, reserved for traitors and people who mix tabs and spaces in the same file. Unkillable even on reboot? How do you even do that? I have a hunch about the database trying to retain consistency even after failures, but what the hell are they doing inside this function creation statement that can break the database that hard? The good news(?) is the comment(!) contains some of the code that was used:
**** XXX **** The creation actual code, was: **** XXX ****

// What DB are we in?
$PGf = $Phoenix['Getext']['fields'];
$db = $PGf['sql_database']? : (
    $PGf['sql_connection'][3]? : (
        $sql->query2cell("SELECT DATABASE()")
    ) );
$func = $sql->query2assoc("SHOW FUNCTION STATUS WHERE `name`='getext' AND `db`='".$sql->e($db)."'");
if ( !count($func) ) {
    $sql->query(<<<MYSQL
CREATE FUNCTION {$sql->gt_db}getext(my_field VARCHAR(255) charset {$ascii}, my_id INT(10) UNSIGNED, my_lang VARCHAR(6) charset {$ascii})
RETURNS TEXT DETERMINISTIC
BEGIN
    DECLARE `txt_out` TEXT;
    SELECT `text` INTO `txt_out` FROM {$sql->gt_db}`getext_fields`
        WHERE `field`=my_field AND `id`=my_id AND `lang` IN ('en',my_lang) AND `text`!=''
        ORDER BY IF(`lang`=my_lang, 0, 1)
        LIMIT 1;
    RETURN `txt_out`;
END;
MYSQL
    );
    ...
}

I hate doing string munging to generate SQL statements, but I especially hate it when the very name of the object created is dynamic. The actual query doesn't look too unreasonable, but everything about how we got here is terrifying.
**** XXX **** Today, this is not used anymore, because... **** XXX ****
Because a simple sub-query perfectly works! And no maria-db bug.
Thus, in the function selects()
The code:
//example: getext('character.name', `character_id`, 'fr') as name
$sels[] = $this->sql_fields->gt_db."getext('$table.$field', $key, '$lang') as `$label`";
Is now:
$sels[] = "(SELECT `text` FROM {$this->sql_fields->gt_db}`getext_fields`
    WHERE `field`='$table.$field' AND `lang` IN ('en', '$lang') AND `id`=$key AND `text`!=''
    ORDER BY IF(`lang`='$lang', 0, 1) LIMIT 1) as `$label`";
Less nice to look at, but no procedure, all the previous problems GONE!
**** XXX The end. */

Of course a simple subquery (or heck, probably a join!) could handle this. Linking data across two tables is what databases are extremely good at. I agree that, at the call site, this is less readable, but there are plenty of ways one could clean this up to make it more readable. Heck, with this, it looks a heck of a lot like you could have written a much simpler function.
Salagir did not provide the entirety of the code, just this comment. The comment remains in the code, as a warning sign. That said, it's a bit verbose. I think a simple "Abandon all hope, ye who enter here," would have covered it.
CodeSOD: A Dropped Down DataSet
While I frequently have complaints about over-reliance on Object Relational Mapping tools, they do offer key benefits. For example, mapping each relation in the database to a type in your programming language at least guarantees a bit of type safety in your code. Or, you could be like Nick L's predecessor, and write VB code like this.
For i As Integer = 0 To SQLDataset.Tables(0).Rows.Count - 1
    Try 'Handles DBNull
        Select Case SQLDataset.Tables(0).Rows(i).Item(0)
            Case "Bently" 'Probes
                Probes_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
            Case "Keyphasor"
                Keyphasor_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
            Case "Transmitter"
                Transmitter_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
            Case "Tachometer"
                Tachometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim.ToUpper.ToString.Trim)
            Case "Dial Therm"
                DialThermometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
            Case "DPS"
                DPS_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
            Case "Pump Bracket"
                PumpBracket_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
            Case "Accelerometer"
                Accelerometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
            Case "Velometer"
                Velometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
        End Select
    Catch
        'MessageBox.Show(text:="Error during SetModelNums().", _
        '                caption:="Error", _
        '                buttons:=MessageBoxButtons.OK, _
        '                icon:=MessageBoxIcon.Error)
    End Try
Next

So, for starters, they're using the ADO .Net DataSet object. This is specifically meant to be a disconnected, in-memory model of the database. The idea is that you might run a set of queries, store the results in a DataSet, and interact with the data entirely in memory after that point. The resulting DataSet will model all the tables and constraints you've pulled in (or allow you to define your own in memory).
One of the things that the DataSet tracks is the names of tables. So, the fact that they go and access .Tables(0) is a nuisance- they could have used the name of the table. And while that might have been awfully verbose, there's nothing stopping them from writing Dim products As DataTable = SQLDataset.Tables("Products").
None of this is what caught Nick's attention, though. You see, the DataTable in the DataSet will do its best to map database fields to .NET types. So it's the chain of calls at the end of most every field that caught Nick's eye:
SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim

ToUpper works because the field in the database is a string field. Also, it returns a string, so there's no need to ToString it before trimming. Of course, it's the Tachometer entry that brings this to its natural absurdity:
Tachometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim.ToUpper.ToString.Trim)

All of this is wrapped up in an exception handler, not because of the risk of an error connecting to the database (the DataSet is disconnected after all), but because of the risk of null values, as the comment helpfully states.
We can see that once, this exception handler displayed a message box, but that has since been commented out, presumably because there are a lot of nulls and the number of message boxes the users had to click through was cumbersome. Now, the exception handler doesn't actually check what kind of exception we get, and just assumes the only thing that could happen was a null value. But that's not true- someone changed one of the tables to add a column to the front, which meant Item(1) was no longer grabbing the field the code expects, breaking the population of the Pump Bracket combo box. There was no indication that this had happened beyond users asking, "Why are there no pump brackets anymore?"
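For contrast, here's a minimal sketch of what a sturdier version of that loop might look like. The table and column names ("Sensors", "SensorType", "ModelNumber") are hypothetical- the submission doesn't include the schema- and it assumes the same form-level combo boxes:

For Each row As DataRow In SQLDataset.Tables("Sensors").Rows
    'Skip genuine DBNulls instead of swallowing every exception.
    If row.IsNull("SensorType") OrElse row.IsNull("ModelNumber") Then Continue For

    Dim modelNum As String = CStr(row("ModelNumber")).Trim().ToUpper()
    Select Case CStr(row("SensorType"))
        Case "Bently" 'Probes
            Probes_Combobox.Items.Add(modelNum)
        Case "Keyphasor"
            Keyphasor_Combobox.Items.Add(modelNum)
        'remaining cases follow the same shape
    End Select
Next

Named lookups mean a column added to the front of the table breaks loudly at the lookup, instead of silently filling the wrong combo box.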
CodeSOD: An Annual Report
Michael has the "fun" task of converting old, mainframe-driven reports into something more modern. This means reading through reams of Intelligent Query code.
Like most of these projects, no one has a precise functional definition of what it's supposed to do. The goal is to replace the system with one that behaves exactly the same, but is more "modern". This means their test cases are "run the two systems in parallel and compare the outputs; if they match, the upgrade is good."
After converting one report, the results did not match. Michael dug in, tracing through the code. The name of the report contained the word "Annual". One of the key variables which drove the original report was named TODAYS-365 (yes, you can put dashes in variable names in IQ). Michael verified that the upgraded report was pulling exactly one year's worth of data. Tracing through the original report, Michael found this:
# DIVIDE ISBLCS BY ISB-COST-UOM GIVING ISB-COST-EACH. MULTIPLY ISB-STK-QOH TIMES ISB-COST-EACH GIVING ISB-ON-HAND-COST.
# SUBTRACT TODAYS-DATE MINUS 426 GIVING TODAYS-365.
# SEARCH FOR ITMMAN = 'USA' AND ITMNMB <> '112-*'

This snippet comes from a report which contains many hundreds of lines of code. So it's very easy to understand how someone could miss the important part of the code. Specifically, it's this line: SUBTRACT TODAYS-DATE MINUS 426 GIVING TODAYS-365.
Subtract 426 from today's date, and store the result in a variable called TODAYS-365. This report isn't for the past year, but for the past year and about two months.
It's impossible to know exactly why, but at a guess, originally the report needed to grab a year. Then, at some point, the requirement changed, probably based on some nonsense around fiscal years or something similar. The least invasive way to make that change was to just change the calculation, leaving the variable name (and the report name) incorrect and misleading. And there it sat, working perfectly fine, until poor Michael came along, trying to understand the code.
The fix was easy, but the repeated pattern of oddly named, unclear variables was not. Remember, the hard part about working on old mainframes isn't learning COBOL or IQ or JCL or whatever antique languages they use; I'd argue those languages are in many cases easier to learn (if harder to use) than modern languages. The hard part is the generations of legacy cruft that's accumulated in them. It's grandma's attic, and granny was a pack rat.
CodeSOD: Concatenated Validation
User inputs are frequently incorrect, which is why we validate them. So, for example, if the user is allowed to enter an "asset ID" to perform some operation on it, we should verify that the asset ID exists before actually doing the operation.
Someone working with Capybara James almost got there. Almost.
private boolean isAssetIdMatching(String requestedAssetId, String databaseAssetId) {
    return (requestedAssetId + "").equals(databaseAssetId + "");
}

This Java code checks if the requestedAssetId, provided by the user, matches a databaseAssetId, fetched from the database. I don't fully understand how we get to this particular function. How is the databaseAssetId fetched? If the fetch were successful, how could it not match? I fear they may do this in a loop across all of the asset IDs in the database until they find a match. I don't know that for sure, but the naming conventions hint at a WTF.
The weird thing here, though, is the choice to concatenate an empty string onto every value. It certainly won't change the equality check for real strings. I strongly suspect the goal was to protect against null values, and it technically does that: Java string concatenation converts a null reference into the text "null", so this line won't throw a NullPointerException. Instead, a null requestedAssetId quietly becomes the four-character string "null"- which means two nulls will happily report a match, as will a null and a literal "null" sitting in the database.
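A quick, self-contained demonstration of that behavior (the variable values here are mine, purely for illustration):

public class ConcatNullDemo {
    public static void main(String[] args) {
        String requestedAssetId = null;   // simulate a missing user input
        String databaseAssetId = "null";  // a real four-character string

        // String concatenation turns a null reference into the text "null",
        // so this line does not throw a NullPointerException...
        boolean matches = (requestedAssetId + "").equals(databaseAssetId + "");

        // ...it prints "true": a null request "matches" the literal string "null".
        System.out.println(matches);
    }
}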
I strongly suspect the developer was more confident in JavaScript, where this pattern "works".
I don't understand why or how this function got here. I'm not the only one. James writes:
No clue what the original developers were intending with this. It sure was a shocker when we inherited a ton of code like this.
Error'd: Monkey Business
If monkeys aren't your bag, Manuel H. is down to clown. "If anyone wants to know the address of that circus - it's written right there. Too bad that it's only useful if you happen to be in the same local subnet..." Or on the same block.
"Where's my pension?" paniced Stewart , explaining "I logged on to check my Aegon pension to banish the Monday blues and my mood only worsened!"
After last week's episode, BJH is keeping a weather eye out for freak freezes. "The Weather.com app has something for everyone. Simple forecasts almost anyone can understand, and technical jargon for the geeks."
"It costs too much to keep a salmon on the road," complains Yitzchok Nolastname . I agree this result looks fishy.
"At Vanity Fair, brevity is the soul of wit," notes jeffphi . Always has been.
CodeSOD: What a CAD
In my career, several times I've ended up being the pet programmer for a team of engineers and CNC operators, which frequently meant helping them do automation in their CAD tools. At its peak complexity, it resulted in a (mostly unsuccessful) attempt to build a lens/optics simulator in RhinoCAD.
Which brings us to the code Nick L sends us. It sounds like Nick's in a similar position: engineers write VB.Net code to control their CAD tool, and then Nick tries desperately to get them to follow some sort of decent coding practice. The result is code like:
'Looping Through S_Parts that have to be inital created
For Each Item As Object In RootPart.S_PartsToCreate
    If Item.objNamDe IsNot String.Empty Then
        If Item.objNamEn IsNot String.Empty Then
            If Item.artCat IsNot String.Empty Then
                If Item.prodFam IsNot String.Empty Then
                    If Item.prodGrp IsNot String.Empty Then
                        'Checking if the Mandatory Properties are in the partfamilies and not empty
                        If Item.Properties.ContainsKey("From_sDesign") Then
                            ' I omitted 134 lines of logic that really should be their own function
                        Else
                            MsgBox("Property From_SDesign is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
                            Exit Sub
                        End If
                    Else
                        MsgBox("Property prodGrp is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
                        Exit Sub
                    End If
                Else
                    MsgBox("Property prodFam is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
                    Exit Sub
                End If
            Else
                MsgBox("Property artCat is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
                Exit Sub
            End If
        Else
            MsgBox("objNamEn is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
            Exit Sub
        End If
    Else
        MsgBox("objNamDe is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
        Exit Sub
    End If
Next

All of their code is stored in a single file called Custom.vb, and it is not stored in source control. Yes, people overwrite each other's code all the time, and it causes endless problems.
Nick writes:
I really wish we'd stop letting engineers code without supervision. Someone should at least tell them about early returns.
CodeSOD: Going on a teDa
Carlos G found some C++ that caused him psychic harm, and wanted to know how it ended up that way. So he combed through the history. Let's retrace the path with him.
Here was the original code:
void parseExpiryDate (const char* expiryDate) {
    // expiryDate is in "YYMM" format
    int year, month;
    sscanf(expiryDate, "%2d%2d", &year, &month);
    //...
}

This code takes a string containing an expiry date, and parses it out. The sscanf function is given a format string describing two two-digit integers, and it stores those values into the year and month variables.
But oops! The expiry date is actually in a MMYY format. How on earth could we possibly fix this? It can't be as simple as just swapping the year and month variables in the sscanf call, can it? (It is.) No, it couldn't be that easy. (It is.) I can't imagine how we would solve this problem. (Just swap them!)
void parseExpiryDate(const char* expiryDate) {
    // expiryDate is in "YYMM" format but, in some part of the code, it is formatted to "MMYY"
    int year, month;
    char correctFormat[5];
    correctFormat[0] = expiryDate[2];
    correctFormat[1] = expiryDate[3];
    correctFormat[2] = expiryDate[0];
    correctFormat[3] = expiryDate[1];
    correctFormat[4] = '\0';
    sscanf(correctFormat, "%2d%2d", &year, &month);
    //...
}

There we go! That was easy! We just go, character by character, and shift the order around and copy it to a new string, so that we format it in YYMM.
The comment here is a wonderful attempt at CYA. By the time this function is called, the input is in MMYY, so that's the relevant piece of information to have in the comment. But the developer really truly believed that YYMM was the original input, and thus shifts blame for the original version of this function to "some part of the code" which is shifting the format around on them, thus justifying… this trainwreck.
Carlos replaced it with:
void parseExpiryDate (const char* expiryDate) {
    // expiryDate is in "MMYY" format
    int month, year;
    sscanf(expiryDate, "%2d%2d", &month, &year);
    //...
}

CodeSOD: IsValidToken
To ensure that several services could only be invoked by trusted parties, someone at Ricardo P's employer had the brilliant idea of requiring a token along with each request. Before servicing a request, they added this check:
private bool IsValidToken(string? token)
{
    if (string.Equals("xxxxxxxx-xxxxxx+xxxxxxx+xxxxxx-xxxxxx-xxxxxx+xxxxx", token))
        return true;
    return false;
}

The token is anonymized here, but it's hard-coded into the code, because checking security tokens into source control and having tokens that never expire have never caused anyone any trouble.
Which, in the company's defense, they did want the token to expire. The problem there is that they wanted to be able to roll out the new token to all of their services over time, which meant the system had to be able to support both the old and new token for a period of time. And you know exactly how they handled that.
private bool IsValidToken(string? token)
{
    if (string.Equals("xxxxxxxx-xxxxxx+xxxxxxx+xxxxxx-xxxxxx-xxxxxx+xxxxx", token))
        return true;
    else if (string.Equals("yyyyyyy-yyyyyy+yyyyy+yyyyy-yyyyy-yyyyy+yyyy", token))
        return true;
    return false;
}

For a change, I'm more mad about this insecurity than the if(cond) return true pattern, but boy, I hate that pattern.
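For what it's worth, rotation doesn't require hard-coding anything. Here's a minimal sketch where the set of currently accepted tokens comes from configuration- the environment variable name SERVICE_TOKENS is purely hypothetical:

using System;
using System.Collections.Generic;

public class TokenValidator
{
    // Load the currently accepted tokens once from configuration, so rotating
    // a token is a config change rather than a redeploy of every service.
    private static readonly HashSet<string> ValidTokens = new HashSet<string>(
        (Environment.GetEnvironmentVariable("SERVICE_TOKENS") ?? string.Empty)
            .Split(',', StringSplitOptions.RemoveEmptyEntries));

    private bool IsValidToken(string? token)
    {
        return token != null && ValidTokens.Contains(token);
    }
}

It still isn't real authentication, but at least the secrets stay out of source control.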
CodeSOD: An Exert Operation
The Standard Template Library for C++ is… interesting. A generic set of data structures and algorithms was a pretty potent idea. In practice, early implementations left a lot to be desired. Because the STL is a core part of C++ at this point, and widely used, it also means that it's slow to change, and each change needs to go through a long approval process.
Which is why the STL didn't have a std::map::contains function until the C++20 standard. There were other options. For example, one could use std::map::count, to count how many times a key appears. Or you could use std::map::find to search for a key. One argument against adding a std::map::contains function is that std::map::count basically does the same job and has the same performance.
None of this stopped people from adding their own. Which brings us to Gaetan's submission. Absent a std::map::contains method, someone wrote a whole slew of fieldExists methods, where field is one of many possible keys they might expect in the map.
bool DataManager::thingyExists (string name)
{
    THINGY* l_pTHINGY = (*m_pTHINGY)[name];
    if(l_pTHINGY == NULL)
    {
        m_pTHINGY->erase(name);
        return false;
    }
    else
    {
        return true;
    }
    return false;
}

I've heard of upsert operations- an update and insert as the same operation, but this is the first exert- an existence check and an insert in the same operation.
"thingy" here is anonymization. The DataManager contained several of these methods, which did the same thing, but checked a different member variable. Other classes, similar to DataManager had their own implementations. In truth, the original developer did a lot of "it's a class, but everything inside of it is stored in a map, that's more flexible!"
In any case, this code starts by using the [] accessor on a member variable m_pTHINGY. This operator returns a reference to what's stored at that key, or if the key doesn't exist inserts a default-constructed instance of whatever the map contains.
What the map contains, in this case, is a pointer to a THINGY, so the default construction of a pointer would be null- and that's what they check. If the value is null, then we erase the key we just inserted and return false. Otherwise, we return true. Otherotherwise, we return false.
As a fun bonus, if someone intentionally stored a null in the map, this will think the key doesn't exist and as a side effect, remove it.
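For contrast, here's a minimal sketch of a non-mutating existence check. THINGY and the map layout are stand-ins for the anonymized originals:

#include <map>
#include <string>

struct THINGY { };

// Returns true only if the key is present and maps to a non-null pointer.
// find() never inserts, so the map is left untouched.
bool thingyExists(const std::map<std::string, THINGY*>& thingies, const std::string& name)
{
    const auto it = thingies.find(name);
    return it != thingies.end() && it->second != nullptr;
    // With C++20, thingies.contains(name) answers "is the key there at all?"
    // without the insert-then-erase dance.
}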
Gaetan writes:
What bugs me most is the final, useless return.
I'll be honest, what bugs me most is the Hungarian notation on local variables. But I'm long established as a Hungarian notation hater.
This code at least works, which compared to some bad C++, puts it on a pretty high level of quality. And it even has some upshots, according to Gaetan:
On the bright side: I have obtained easy performance boosts by performing that kind of cleanup lately in that particular codebase.
Error'd: It's Getting Hot in Here
Or cold. It's getting hot and cold. But on average... no. It's absolutely unbelievable.
"There's been a physics breakthrough!" Mate exclaimed. "Looking at meteoblue, I should probably reconsider that hike on Monday." Yes, you should blow it off, but you won't need to.
An anonymous fryfan frets "The yellow arches app (at least in the UK) is a buggy mess, and I'm amazed it works at all when it does. Whilst I've heard of null, it would appear that they have another version of null, called ullnullf! Comments sent to their technical team over the years, including those with good reproduceable bugs, tend to go unanswered, unfortunately."
Llarry A. whipped out his wallet but baffled "I tried to pay in cash, but I wasn't sure how much."
"Github goes gonzo!" groused Gwenn Le Bihan. "Seems like Github's LLM model broke containment and error'd all over the website layout. crawling out of its grouped button." Gross.
Peter G. gripes "The text in the image really says it all." He just needs to rate his experience above 7 in order to enable the submit button.
CodeSOD: ConVersion Version
Mads introduces today's code sample with this line: "this was before they used git to track changes".
Note, this is not to say that they were using SVN, or Mercurial, or even Visual Source Safe. They were not using anything. How do I know?
/**
 * Converts HTML to PDF using HTMLDOC.
 *
 * @param printlogEntry
 *
 * @param inBytes
 *            html.
 * @param outPDF
 *            pdf.
 * @throws IOException
 *             when error.
 * @throws ParseException
 */
public void fromHtmlToPdfOld(PrintlogEntry printlogEntry, byte[] inBytes, final OutputStream outPDF)
        throws IOException, ParseException {...}

/**
 * Converts HTML to PDF using HTMLDOC.
 *
 * @param printlogEntry
 *
 * @param inBytes
 *            html.
 * @param outPDF
 *            pdf.
 * @throws IOException
 *             when error.
 * @throws ParseException
 */
public void fromHtmlToPdfNew(PrintlogEntry printlogEntry, byte[] inBytes, final OutputStream outPDF)
        throws IOException, ParseException {...}

Originally, the function was just called fromHtmlToPdf. Instead of updating the implementation, or using it as a wrapper to call the correct implementation, they renamed it to Old, added one named New, then let the compiler tell them where they needed to update the code to use the new implementation.
Mads adds: "And this is just one example in this code. This far, I have found 5 of these."
Representative Line: JSONception
I am on record as not particularly loving JSON as a serialization format. It's fine, and I'm certainly not going to die on any hills over it, but I think that as we stripped down the complexity of XML we threw away too much.
On the flip side, the simplicity means that it's harder to use it wrong. It's absent many footguns.
Well, one might think. But then Hootentoot ran into a problem. You see, an internal partner needed to send them a JSON document which contains a JSON document. Now, one might say, "isn't any JSON object a valid sub-document? Can't you just nest JSON inside of JSON all day? What could go wrong here?"
"value":"[{\"value\":\"1245\",\"begin_datum\":\"2025-05-19\",\"eind_datum\":null},{\"value\":\"1204\",\"begin_datum\":\"2025-05-19\",\"eind_datum\":\"2025-05-19\"}]",This. This could go wrong. They embedded JSON inside of JSON… as a string.
Hootentoot references the hottest memes of a decade and a half ago to describe this Xzibit:
Yo dawg, i heard you like JSON, so i've put some JSON in your JSON
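For reference, properly nested JSON would carry the array as a real value rather than an escaped string- the same data, using the same field names, just without the backslashes:

"value": [
  { "value": "1245", "begin_datum": "2025-05-19", "eind_datum": null },
  { "value": "1204", "begin_datum": "2025-05-19", "eind_datum": "2025-05-19" }
]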
CodeSOD: A Unique Way to Primary Key
"This keeps giving me a primary key violation!" complained one of Nancy's co-workers. "Screw it, I'm dropping the primary key constraint!"
That was a terrifying thing to hear someone say out loud. Nancy decided to take a look at the table before anyone did anything they'd regret.
CREATE TYPE record_enum AS ENUM('parts');

CREATE TABLE IF NOT EXISTS parts (
    part_uuid VARCHAR(40) NOT NULL,
    record record_enum NOT NULL,
    ...
    ...
    ...
    PRIMARY KEY (part_uuid, record)
);

This table has a composite primary key. The first is a UUID, and the second is an enum with only one option in it- the name of the table. The latter column seems, well, useless, and certainly isn't going to make the primary key any more unique. But the UUID column should be unique. Universally unique, even.
Nancy writes:
Was the UUID not unique enough, or perhaps it was too unique?! They weren't able to explain why they had designed the table this way.
Nor were they able to explain why they kept violating the primary key constraint. It kept happening to them, for some reason until eventually it stopped happening, also for some reason.
The Service Library Service
Adam's organization was going through a period of rapid growth. Part of this growth was spinning up new backend services to support new functionality. The growth would have been extremely fast, except for one thing applying back pressure: for some reason, spinning up a new service meant recompiling and redeploying all the other services.
Adam didn't understand why, but it seemed like an obvious place to start poking at something for improvement. All of the services depended on a library called "ServiceLib"- though not all of them actually used the library. The library was a set of utilities for administering, detecting, and interacting with services in their environment- essentially a homegrown fabric/bus architecture.
It didn't take long, looking at the source control history, to understand why there was a rebuild after the release of every service. Each service triggered a one line change in this:
enum class Services {
    IniTechBase = 103,
    IniTechAdvanced = 99,
    IniTechFooServer = 102,
    …
}

Each service had a unique, numerical identifier, and this mapped them into an enumerated type.
Adam went to the tech lead, Raymond. "Hey, I've got an idea for speeding up our release process- we should stop hard coding the service IDs in ServiceLib."
Raymond looked at Adam like one might examine an over-enthusiastic lemur. "They're not hard-coded. We store them in an enum."
Eventually Raymond got promoted- for all of their heroic work on managing this rapidly expanding library of services. The new tech lead who came on was much more amenable to "not storing rapidly changing service IDs in an enum", "not making every service depend on a library they often don't need", and "not putting admin functionality in every service just because it happens to link that library".
Eventually, ServiceLib became its own service, and actually helped- instead of hindered- delivering new functionality.
Unfortunately, with no more highly visible heroics to deliver functionality, the entire department became a career dead end. Sure, they delivered on time and under budget consistently, but there were no rockstar developers like Raymond on the team anymore, the real up-and-comers who were pushing themselves.
Error'd: Nicknamed Nil
Michael R. is back with receipts. "I have been going to Tayyabs for >20 years. In the past they only accepted cash tips. Good to see they are testing a new way now."
An anonymous murmurs of Outlook 365: "I appreciate being explicit about the timezone for the appointments, but I am wondering how those \" got there. (And the calender in german should start on Monday not Sunday)"
"Only my friends call me {0}," complains Alejandro D. "But wait! I haven't logged in yet, how does DHL know my name?"
"Prices per square foot are through the roof," puns Mr. TA "In fact, I'm guessing 298 sq ft is the area of the kitchen cabinets alone." The price isn't so bad, it's the condo fees that will kill you.
TheRealSteveJudge writes "Have a look at the cheapest ticket price which is available for a ride of 45 km from Duisburg to Xanten -- Günstiger Ticketpreis in German. That's really affordable!" If you've just purchased a 298 ft^2 condo at the Ritz.
CodeSOD: Just a Few Updates
Misha has a co-worker who has unusual ideas about how database performance works. This co-worker, Ted, has a vague understanding that a SQL query optimizer will attempt to find the best execution path for a given query. Unfortunately, Ted has just enough knowledge to be dangerous; he believes that the job of a developer is to write SQL queries that will "trick" the optimizer into doing an even better job, somehow.
This means that Ted loves subqueries.
For example, let's say you had a table called tbl_updater, which is used to store pending changes for a batch operation that will later get applied. Each change in updater has a unique change key that identifies it. For reasons best not looked into too deeply, at some point in the lifecycle of a record in this table, the application needs to null out several key fields based on the change value.
If you or I were writing this, we might do something like this:
update tbl_updater set id = null, date = null, location = null, type = null, type_id = null where change = @change

And this is how you know that you and I are fools, because we didn't use a single subquery.
update tbl_updater set id = null where updater in (select updater from tbl_updater where change = @change)
update tbl_updater set date = null where updater in (select updater from tbl_updater where change = @change)
update tbl_updater set location = null where updater in (select updater from tbl_updater where change = @change)
update tbl_updater set type = null where updater in (select updater from tbl_updater where change = @change)
update tbl_updater set date = null where updater in (select updater from tbl_updater where change = @change)
update tbl_updater set type_id = null where updater in (select updater from tbl_updater where change = @change)

So here, Ted uses where updater in (subquery) which is certainly annoying and awkward, given that we know that change is a unique key. Maybe Ted didn't know that? Of course, one of the great powers of relational databases is that they offer data dictionaries so you can review the structure of tables before writing queries, so it's very easy to find out that the key is unique.
But that simple ignorance doesn't explain why Ted broke it out into multiple updates. If insanity is doing the same thing again and again expecting different results, what does it mean when you actually do get different results but also could have just done all this once?
Misha asked Ted why he took this approach. "It's faster," he replied. When Misha showed benchmarks that proved it emphatically wasn't faster, he just shook his head. "It's still faster this way."
Faster than what? Misha wondered.
Representative Line: National Exclamations
Carlos and Claire found themselves supporting a 3rd party logistics package, called IniFreight. Like most "enterprise" software, it was expensive, unreliable, and incredibly complicated. It had also been owned by four different companies during the time Carlos had supported it, as its various owners underwent a series of acquisitions. It kept them busy, which is better than being bored.
One day, Claire asked Carlos, "In SQL, what does an exclamation point mean?"
"Like, as a negation? I don't think most SQL dialects support that."
"No, like-" and Claire showed him the query.
select * from valuation where origin_country < '!'

"IniFreight, I presume?" Carlos asked.
"Yeah. I assume this means, 'where origin country isn't blank?' But why not just check for NOT NULL?"
The why was easy to answer: origin_country had a constraint which prohibited nulls. But the input field didn't do a trim, so the field did allow whitespace-only strings. The ! is the first printable, non-whitespace character in ASCII (which is what their database was using, because it was built before "support wide character sets" was a common desire), so comparing against '!' is their way of separating the effectively blank values from the real ones.
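If you wanted the intent to be legible, a rough equivalent of that predicate (assuming, as Carlos says, the column is NOT NULL but may hold whitespace-only values) would be:

select *
  from valuation
 where trim(origin_country) = ''

Whether either version can use an index is a separate conversation, but at least nobody has to know their ASCII table to read it.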
Unfortunately, this means that my micronation, which is simply spelled with the ASCII character 0x07, will never show up in their database. You might not think you're familiar with my country, but trust me- it'll ring a bell.
CodeSOD: Born Single
Alistair sends us a pretty big blob of code, but it's a blob which touches upon everyone's favorite design pattern: the singleton. It's a lot of Java code, so we're going to take this as chunks. Let's start with the two methods responsible for constructing the object.
The purpose of this code is to parse an XML file, and construct a mapping from a "name" field in the XML to a "batch descriptor".
/**
 * Instantiates a new batch manager.
 */
private BatchManager() {
    try {
        final XMLReader xmlReader = XMLReaderFactory.createXMLReader();
        xmlReader.setContentHandler(this);
        xmlReader.parse(new InputSource(this.getClass().getClassLoader().getResourceAsStream("templates/" + DOCUMENT)));
    } catch (final Exception e) {
        logger.error("Error parsing Batch XML.", e);
    }
}

/*
 * (non-Javadoc)
 *
 * @see nz.this.is.absolute.crap.sax.XMLEntity#initChild(java.lang.String,
 * java.lang.String, java.lang.String, org.xml.sax.Attributes)
 */
@Override
protected ContentHandler initChild(String uri, String localName, String qName, Attributes attributes) throws SAXException {
    final BatchDescriptor batchDescriptor = new BatchDescriptor();
    // put it in the map
    batchMap.put(attributes.getValue("name"), batchDescriptor);
    return batchDescriptor;
}

Here we see a private constructor, which is reasonable for a singleton. It creates a SAX based reader. SAX is event driven- instead of loading the whole document into a DOM, it emits an event as it encounters each new key element in the XML document. It's cumbersome to use, but far more memory efficient, and I'd hardly say this.is.absolute.crap, but whatever.
This code is perfectly reasonable. But do you know what's unreasonable? There's a lot more code, and these are the only things not marked as static. So let's keep going.
// singleton instance so that static batch map can be initialised using
// xml
/** The Constant singleton. */
@SuppressWarnings("unused")
private static final Object singleton = new BatchManager();

Wait… why is the singleton object throwing warnings about being unused? And wait a second, what is that comment saying, "so the static batch map can be initialised"? I saw a batchMap up in the initChild method above, but it can't be…
private static Map<String, BatchDescriptor> batchMap = new HashMap<String, BatchDescriptor>();

Oh. Oh no.
/**
 * Gets the.
 *
 * @param batchName
 *            the batch name
 *
 * @return the batch descriptor
 */
public static BatchDescriptor get(String batchName) {
    return batchMap.get(batchName);
}

/**
 * Gets the post to selector name.
 *
 * @param batchName
 *            the batch name
 *
 * @return the post to selector name
 */
public static String getPostToSelectorName(String batchName) {
    final BatchDescriptor batchDescriptor = batchMap.get(batchName);
    if (batchDescriptor == null) {
        return null;
    }
    return batchDescriptor.getPostTo();
}

There are more methods, and I'll share the whole code at the end, but this gives us a taste. Here's what this code is actually doing.
It creates a static Map. static, in this context, means that this instance is shared across all instances of BatchManager. They also create a static instance of BatchManager inside of itself. The constructor of that instance then executes, populating that static Map. Now, when anyone invokes BatchManager.get, it will use that static Map to resolve the lookup.
This certainly works, and it offers a certain degree of cleanness in its implementation. A more conventional singleton would have the Map being owned by an instance, and it's just using the singleton convention to ensure there's only a single instance. This version's calling convention is certainly nicer than doing something like BatchManager.getInstance().get(…), but there's just something unholy about this that sticks into me.
I can't say for certain if it's because I just hate Singletons, or if it's this specific abuse of constructors and static members.
This is certainly one of the cases of misusing a singleton- it does not represent something there can be only one of, it's ensuring that an expensive computation is only allowed to be done once. There are better ways to handle that lifecycle. This approach also forces that expensive operation to happen at application startup, instead of being something flexible that can be evaluated lazily. It's not wrong to do this eagerly, but building something that can only do it eagerly is a mistake.
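As a sketch of one such better way (the names and structure here are mine, not the original code), the initialization-on-demand holder idiom keeps the map in one clearly owned place and defers the expensive parse until the first lookup:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public final class BatchRegistry {

    // Placeholder for whatever a batch descriptor actually holds.
    public static final class BatchDescriptor { /* ... */ }

    private BatchRegistry() { }

    // The JVM guarantees Holder is initialized exactly once, on first use,
    // and that the initialization is thread-safe.
    private static final class Holder {
        static final Map<String, BatchDescriptor> BATCH_MAP =
                Collections.unmodifiableMap(parseBatchesXml());
    }

    private static Map<String, BatchDescriptor> parseBatchesXml() {
        Map<String, BatchDescriptor> map = new HashMap<>();
        // ...parse Batches.xml and populate the map here...
        return map;
    }

    public static BatchDescriptor get(String batchName) {
        return Holder.BATCH_MAP.get(batchName);
    }
}

The calling convention stays as pleasant as BatchManager.get(…), without smuggling the work through an unused field's constructor.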
In any case, the full code submission follows:
package nz.this.is.absolute.crap.server.template; import java.io.IOException; import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; import java.util.Iterator; import java.util.Map; import java.util.ResourceBundle; import nz.this.is.absolute.crap.KupengaException; import nz.this.is.absolute.crap.SafeComparator; import nz.this.is.absolute.crap.sax.XMLEntity; import nz.this.is.absolute.crap.selector.Selector; import nz.this.is.absolute.crap.selector.SelectorItem; import nz.this.is.absolute.crap.server.BatchValidator; import nz.this.is.absolute.crap.server.Validatable; import nz.this.is.absolute.crap.server.ValidationException; import nz.this.is.absolute.crap.server.business.BusinessObject; import nz.this.is.absolute.crap.server.database.EntityHandler; import nz.this.is.absolute.crap.server.database.SQLEntityHandler; import org.apache.log4j.Logger; import org.xml.sax.Attributes; import org.xml.sax.ContentHandler; import org.xml.sax.InputSource; import org.xml.sax.SAXException; import org.xml.sax.XMLReader; import org.xml.sax.helpers.XMLReaderFactory; /** * The Class BatchManager. */ public class BatchManager extends XMLEntity { private static final Logger logger = Logger.getLogger(BatchManager.class); /** The Constant DOCUMENT. */ private final static String DOCUMENT = "Batches.xml"; /** * The Class BatchDescriptor. */ public class BatchDescriptor extends XMLEntity { /** The batchSelectors. */ private final Collection<String> batchSelectors = new ArrayList<String>(); /** The dependentCollections. */ private final Collection<String> dependentCollections = new ArrayList<String>(); /** The directSelectors. */ private final Collection<String> directSelectors = new ArrayList<String>(); /** The postTo. */ private String postTo; /** The properties. */ private final Collection<String> properties = new ArrayList<String>(); /** * Gets the batch selectors iterator. * * @return the batch selectors iterator */ public Iterator<String> getBatchSelectorsIterator() { return this.batchSelectors.iterator(); } /** * Gets the dependent collections iterator. * * @return the dependent collections iterator */ public Iterator<String> getDependentCollectionsIterator() { return this.dependentCollections.iterator(); } /** * Gets the post to. * * @return the post to */ public String getPostTo() { return this.postTo; } /** * Gets the post to business object. 
* * @param businessObject * the business object * @param postHandler * the post handler * * @return the post to business object * * @throws ValidationException * the validation exception */ private BusinessObject getPostToBusinessObject( BusinessObject businessObject, EntityHandler postHandler) throws ValidationException { if (this.postTo == null) { return null; } final BusinessObject postToBusinessObject = businessObject .getBusinessObjectFromMap(this.postTo, postHandler); // copy properties for (final String propertyName : this.properties) { String postToPropertyName; if ("postToStatus".equals(propertyName)) { // status field on batch entity refers to the batch entity // itself // so postToStatus is used for updating the status property // of the postToBusinessObject itself postToPropertyName = "status"; } else { postToPropertyName = propertyName; } final SelectorItem destinationItem = postToBusinessObject .find(postToPropertyName); if (destinationItem != null) { final Object oldValue = destinationItem.getValue(); final Object newValue = businessObject.get(propertyName); if (SafeComparator.areDifferent(oldValue, newValue)) { destinationItem.setValue(newValue); } } } // copy direct selectors for (final String selectorName : this.directSelectors) { final SelectorItem destinationItem = postToBusinessObject .find(selectorName); if (destinationItem != null) { // get the old and new values for the selectors Selector oldSelector = (Selector) destinationItem .getValue(); Selector newSelector = (Selector) businessObject .get(selectorName); // strip them down to bare identifiers for comparison if (oldSelector != null) { oldSelector = oldSelector.getAsIdentifier(); } if (newSelector != null) { newSelector = newSelector.getAsIdentifier(); } // if they're different then update if (SafeComparator.areDifferent(oldSelector, newSelector)) { destinationItem.setValue(newSelector); } } } // copy batch selectors for (final String batchSelectorName : this.batchSelectors) { final Selector batchSelector = (Selector) businessObject .get(batchSelectorName); if (batchSelector == null) { throw new ValidationException( "\"PostTo\" selector missing."); } final BusinessObject batchObject = postHandler .find(batchSelector); if (batchObject != null) { // get the postTo selector for the batch object we depend on final BatchDescriptor batchDescriptor = batchMap .get(batchObject.getName()); if (batchDescriptor.postTo != null && postToBusinessObject .containsKey(batchDescriptor.postTo)) { final Selector realSelector = batchObject .getBusinessObjectFromMap( batchDescriptor.postTo, postHandler); postToBusinessObject.put(batchDescriptor.postTo, realSelector); } } } businessObject.put(this.postTo, postToBusinessObject); return postToBusinessObject; } /* * (non-Javadoc) * * @see * nz.this.is.absolute.crap.sax.XMLEntity#initChild(java.lang.String, * java.lang.String, java.lang.String, org.xml.sax.Attributes) */ @Override protected ContentHandler initChild(String uri, String localName, String qName, Attributes attributes) throws SAXException { if ("Properties".equals(qName)) { return new XMLEntity() { @Override protected ContentHandler initChild(String uri, String localName, String qName, Attributes attributes) throws SAXException { BatchDescriptor.this.properties.add(attributes .getValue("name")); return null; } }; } else if ("DirectSelectors".equals(qName)) { return new XMLEntity() { @Override protected ContentHandler initChild(String uri, String localName, String qName, Attributes attributes) throws SAXException { 
BatchDescriptor.this.directSelectors.add(attributes .getValue("name")); return null; } }; } else if ("BatchSelectors".equals(qName)) { return new XMLEntity() { @Override protected ContentHandler initChild(String uri, String localName, String qName, Attributes attributes) throws SAXException { BatchDescriptor.this.batchSelectors.add(attributes .getValue("name")); return null; } }; } else if ("PostTo".equals(qName)) { return new XMLEntity() { @Override protected ContentHandler initChild(String uri, String localName, String qName, Attributes attributes) throws SAXException { BatchDescriptor.this.postTo = attributes .getValue("name"); return null; } }; } else if ("DependentCollections".equals(qName)) { return new XMLEntity() { @Override protected ContentHandler initChild(String uri, String localName, String qName, Attributes attributes) throws SAXException { BatchDescriptor.this.dependentCollections .add(attributes.getValue("name")); return null; } }; } return null; } } /** The batchMap. */ private static Map<String, BatchDescriptor> batchMap = new HashMap<String, BatchDescriptor>(); /** * Gets the. * * @param batchName * the batch name * * @return the batch descriptor */ public static BatchDescriptor get(String batchName) { return batchMap.get(batchName); } /** * Gets the post to selector name. * * @param batchName * the batch name * * @return the post to selector name */ public static String getPostToSelectorName(String batchName) { final BatchDescriptor batchDescriptor = batchMap.get(batchName); if (batchDescriptor == null) { return null; } return batchDescriptor.getPostTo(); } // singleton instance so that static batch map can be initialised using // xml /** The Constant singleton. */ @SuppressWarnings("unused") private static final Object singleton = new BatchManager(); /** * Post. * * @param businessObject * the business object * * @throws Exception * the exception */ public static void post(BusinessObject businessObject) throws Exception { // validate the batch root object only - it can validate the rest if it // needs to if (businessObject instanceof Validatable) { if (!BatchValidator.validate(businessObject)) { logger.warn(String.format("Validating %s failed", businessObject.getClass().getSimpleName())); throw new ValidationException( "Batch did not validate - it was not posted"); } ((Validatable) businessObject).validator().prepareToPost(); } final SQLEntityHandler postHandler = new SQLEntityHandler(true); final Iterator<BusinessObject> batchIterator = new BatchIterator( businessObject, null, postHandler); // iterate through batch again posting each object try { while (batchIterator.hasNext()) { post(batchIterator.next(), postHandler); } postHandler.commit(); } catch (final Exception e) { logger.error("Exception occurred while posting batches", e); // something went wrong postHandler.rollback(); throw e; } return; } /** * Post. 
* * @param businessObject * the business object * @param postHandler * the post handler * * @throws KupengaException * the kupenga exception */ private static void post(BusinessObject businessObject, EntityHandler postHandler) throws KupengaException { if (businessObject == null) { return; } if (Boolean.TRUE.equals(businessObject.get("posted"))) { return; } final BatchDescriptor batchDescriptor = batchMap.get(businessObject .getName()); final BusinessObject postToBusinessObject = batchDescriptor .getPostToBusinessObject(businessObject, postHandler); if (postToBusinessObject != null) { postToBusinessObject.save(postHandler); } businessObject.setItemValue("posted", Boolean.TRUE); businessObject.save(postHandler); } /** * Instantiates a new batch manager. */ private BatchManager() { try { final XMLReader xmlReader = XMLReaderFactory.createXMLReader(); xmlReader.setContentHandler(this); xmlReader.parse(new InputSource(this.getClass().getClassLoader().getResourceAsStream("templates/" + DOCUMENT))); } catch (final Exception e) { logger.error("Error parsing Batch XML.", e); } } /* * (non-Javadoc) * * @see nz.this.is.absolute.crap.sax.XMLEntity#initChild(java.lang.String, * java.lang.String, java.lang.String, org.xml.sax.Attributes) */ @Override protected ContentHandler initChild(String uri, String localName, String qName, Attributes attributes) throws SAXException { final BatchDescriptor batchDescriptor = new BatchDescriptor(); // put it in the map batchMap.put(attributes.getValue("name"), batchDescriptor); return batchDescriptor; } }

CodeSOD: Back Up for a Moment
James's team has a pretty complicated deployment process implemented as a series of bash scripts. The deployment is complicated, the scripts doing the deployment are complicated, and failures mid-deployment are common. That means they need to gracefully roll back, and the way they do that is by making backup copies of the modified files.
This is how they do that.
DATE=`date '+%Y%m%d'`
BACKUPDIR=`dirname ${DESTINATION}`/backup
if [ ! -d $BACKUPDIR ]
then
    echo "Creating backup directory ..."
    mkdir -p $BACKUPDIR
fi

FILENAME=`basename ${DESTINATION}`
BACKUPFILETYPE=${BACKUPDIR}/${FILENAME}.${DATE}
BACKUPFILE=${BACKUPFILETYPE}-1
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then BACKUPFILE=${BACKUPFILETYPE}-2 ; fi
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then BACKUPFILE=${BACKUPFILETYPE}-3 ; fi
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then BACKUPFILE=${BACKUPFILETYPE}-4 ; fi
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then BACKUPFILE=${BACKUPFILETYPE}-5 ; fi
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then BACKUPFILE=${BACKUPFILETYPE}-6 ; fi
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then BACKUPFILE=${BACKUPFILETYPE}-7 ; fi
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then BACKUPFILE=${BACKUPFILETYPE}-8 ; fi
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then BACKUPFILE=${BACKUPFILETYPE}-9 ; fi
if [ -f ${BACKUPFILE} ] || [ -f ${BACKUPFILE}.gz ] ; then
    cat <<EOF
You have already had 9 rates releases in one day.
${BACKUPFILE} already exists, do it manually !!!
EOF
    exit 2
fi

Look, I know that loops in bash can be annoying, but they're not that annoying.
This code creates a backup directory (if it doesn't already exist), and then builds a file name for the file we're about to back up, in the form OriginalName.YYYYMMDD-n. It tests whether that file (or its .gz-compressed variant) already exists, and if it does, it increments n by one. It does this until either it finds a file name that doesn't exist, or it hits 9, at which point it gives you a delightfully passive-aggressive message:
You have already had 9 rates releases in one day. ${BACKUPFILE} already exists, do it manually !!!
Yeah, do it manually. Now, admittedly, I don't think a lot of folks want to do more than 9 releases in a given day, but there's no reason why they couldn't just keep trying until they find a good filename. Or even better, require each release to have an identifier (like the commit or build number or whatever) and then use that for the filenames.
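For the copy-pasted checks specifically, a short loop reusing the script's own variable names does the same job- and stops caring at release number nine:

# Keep bumping the suffix until we find a name that isn't taken (compressed or not).
n=1
BACKUPFILE="${BACKUPFILETYPE}-${n}"
while [ -f "${BACKUPFILE}" ] || [ -f "${BACKUPFILE}.gz" ]; do
    n=$((n + 1))
    BACKUPFILE="${BACKUPFILETYPE}-${n}"
done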
Of course, just fixing this copy doesn't address the real WTF, because we laid out the real WTF in the first paragraph: deployment is a series of complicated bash scripts doing complicated steps that can fail all the time. I've worked in places like that, and it's always a nightmare. There are better tools! Our very own Alex has his product, of course, but there are a million ways to get your builds repeatable and reliable that don't involve BuildMaster but also don't involve fragile scripts. Please, please use one of those.
Error'd: Another One Rides the Bus
"Toledo is on Earth, Adrian must be on Venus," remarks Russell M. , explaining "This one's from weather.gov. Note that Adrian is 28 million miles away from Toledo. Being raised in Toledo, Michigan did feel like another world sometimes, but this is something else." Even Toledo itself is a good bit distant from Toledo. Definitely a long walk.
"TDSTF", reports regular Michael R. from London, well distant from Toledo OH and Toledo ES.
Also on the bus, astounded Ivan muses "It's been a long while since I've seen a computer embedded in a piece of public infrastructure (here: a bus payment terminal) literally snow crash. They are usually better at listening to Reason..."
From Warsaw, Jaroslaw time travels twice. First with this entry "Busses at the bus terminus often display time left till departure, on the front display and on the screens inside. So one day I entered the bus - front display stating "Departure in 5 minutes". Inside I saw this (upper image)... After two minutes the numbers changed to the ones on the lower image. I'm pretty sure I was not sitting there for six hours..."
And again with an entry we dug out of the way back bin while I was looking for more bus-related items. Was it a total coincidence this bus bit also came from Jaroslaw, who just wanted to know "Is bus sharing virtualised that much?" I won't apologize, any kind of bus will do when we're searching hard to match a theme.