Here’s the HTML:

<div style='width:300px;border:1px solid green'>
  <div>Outer div</div>
  <div style='width:100%;border:1px solid red;margin:10px;'>
    Inner div, 10px margin.
  </div>
  <div style='width:100%;border:1px solid red;padding:10px;'>
    Inner div, 10px padding.
  </div>
  <div style='width:100%;border:1px solid red;padding:10px;box-sizing:border-box'>
    Same, with box-sizing: border-box
  </div>
  <table style='width:100%;border:1px solid red;padding:10px;'>
    <tr><td>Inner table, 10px padding</td></tr>
  </table>
</div>

And it looks like this in my Chrome:

Why is box-sizing acting different on table vs div?

I think I understand everything until the last one. My Chrome inspector shows the table’s computed box-sizing style is content-box so I expect it to behave like the second div, and overflow and look ugly. Why is it different? Is this documented somewhere in the HTML/CSS spec?

Problem courtesy of: Rob N


Yes, CSS2.1 states the following for tables with the separated borders model:

However, in HTML and XHTML1, the width of the <table> element is the distance from the left border edge to the right border edge.

Note: In CSS3 this peculiar requirement will be defined in terms of UA style sheet rules and the ‘box-sizing’ property.

The current CSS3 definition of box-sizing does not say anything about this, but translating the above quote it basically means in (X)HTML, tables use the border-box model: padding and borders do not add to the specified width of a table.

Note that in terms of the box-sizing property, different browsers seem to handle this special case differently:

  • Chrome

    box-sizing is set to the initial value, content-box; changing it has no effect whatsoever. Neither does redeclaring box-sizing: content-box within the inline styles, but that should be expected. Either way, Chrome appears to be forcing the table to always use the border-box model.

  • IE

    box-sizing is set to border-box; changing it to content-box causes it to behave like the second div.

  • Firefox

    -moz-box-sizing is set to border-box; changing it to content-box or padding-box causes it to resize accordingly.

Since CSS3 does not yet make any mention of table box sizing, this should not come as a surprise. At the very least, the result is the same — it’s only the underlying implementation that’s different. But given what the note says above, I would say that IE and Firefox are closer to the intended CSS3 definition, since in Chrome you can’t seem to change a table’s box model using the box-sizing property.
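If you would rather have the divs behave like the table than the other way around, one widely used normalization (my illustration, not part of the original answer) opts every element into the border-box model:

```css
/* Draw padding and borders inside the declared width,
   matching how (X)HTML tables already behave. */
html {
  box-sizing: border-box;
}
*, *::before, *::after {
  box-sizing: inherit;
}
```

With this applied, the second div in the example no longer overflows its 300px parent; the table is unaffected since, per the above, it already uses the border-box model.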

Tables with the collapsing border model don’t have padding at all, although in this case it’s not relevant since your table does not use this model:

Note that in this model, the width of the table includes half the table border. Also, in this model, a table does not have padding (but does have margins).

Solution courtesy of: BoltClock


That’s how <table>, <td> and <th> elements work. These elements are not block-level elements.

They contain padding inside the given width, just as box-sizing: border-box would on other block-level elements.

FYI, I didn’t find this documented anywhere.

Discussion courtesy of: Shekhar K. Sharma

This recipe can be found in its original form on Stack Overflow.



Microsoft has kicked out a second preview of .NET Core 3 and naturally we fired up the IDE to see what has changed.

Aside from the improvements in C# 8.0, which require the unwary to manually select the beta language in the project’s advanced build properties in order to actually use them, there are some handy new features among the fixes.

JSON and the .NET Core-onauts

First up are the improvements in JSON performance. The arrival of Utf8JsonWriter heralds a speed bump of 30-80 per cent over JSON.NET’s writer when writing UTF-8 encoded JSON text from .NET types.

Utf8JsonReader had already made an appearance in .NET Core 3 Preview 1, claiming a doubling of performance over the Json.NET equivalent, and while less dramatic this time around, the improvement writing the stuff is welcome.

Building on Utf8JsonReader in Preview 2 is JsonDocument, which takes the JSON data and shovels it into a read-only Document Object Model (DOM) whose elements can be queried and enumerated. Again, Redmond reckons that parsing a JSON payload and accessing its members is two to three times the speed of Json.NET.

Your mileage may vary. While definitely snappier, I didn’t see such a dramatic jump, although obviously the type and quantity of data will have an impact and, heck, this is still Preview code.

G-P-I-Oh my

Another feature of Preview 1 that has seen improvement in this second version is support for the Raspberry Pi’s GPIO connector. IoT devs just love reading from and writing to those pins to drive the likes of LED displays or read sensor data, and to that end a couple of NuGet packages have put in an appearance in the form of System.Device.Gpio and Iot.Device.Bindings, with APIs for GPIO, SPI, I2C and PWM devices.

A second preview of .NET Core 3? Shucks, Microsoft. You spoil us



Finally, the updated preview improves .NET’s command-line interface, which can now list tools and their manifest, or create the manifest required by local tools. Pluggers of memory leaks will also be pleased to note the arrival of assembly unloadability via AssemblyLoadContext. An application can then load and unload an assembly "forever, without experiencing a memory leak". Strong words.

Windows, macOS and a wide variety of Linux distributions are supported by .NET Core 3, although Microsoft is quite keen that penguinistas consider using Snap if their distro supports it. Even Windows 7 remains a supported option.

As for deploying apps written with the thing? Windows users are directed toward the new app package format MSIX, although Microsoft’s Rich Lander hinted at a glorious future containing standalone executables (for the desktop at least):

"We are working on making it possible to create standalone EXEs for desktop apps. The Windows team is enabling desktop apps to use WinUI/Xaml, so your standalone app will be able to use that UI stack, or WPF or Windows Forms. Your choice."

Let us hope that "standalone" does not mean "shoveling every .NET Core component into one, bloated EXE". That way lies madness and rampant storage consumption. ®



“Pause Ads” Are Coming to Hulu, Here’s What You Need to Know

Late last year, it was revealed that Hulu would start showing ads when users pause a stream. While this sounds absolutely terrible, how this is going to work in reality doesn’t sound so bad.

First off, these won’t be video ads, but instead just static images on the side of the screen. That immediately makes it less obtrusive and keeps the ads out of the way. Really, it’s a smart way of doing it, because it doesn’t bother users that much and keeps the ad revenue coming for Hulu.

The second part of the equation here is who will see these ads. That answer is equally simple: according to Engadget, any user who subscribes to Hulu’s ad-supported tier—which just got a couple-dollar price drop, by the way—will be subject to this new ad format. Users who subscribe to Hulu’s ad-free tier will continue to, well, not get ads. Pretty simple.

Finally, there’s the when. These ads will start to roll out “this spring,” which is just right around the corner despite what weather reports look like today.

via Engadget



Ride-hailing companies Uber and Cabify are to suspend their services in Barcelona in response to the regional government’s imposition of limits on how they operate in the city.

The Catalan government ruled that ride-hailing services could only pick up passengers after a 15-minute delay from the time they were booked.

The decision followed mass protests by Barcelona taxi drivers who complained that their business was being undermined and the services did not operate on a level playing field.

Barcelona is Spain’s second-largest city and one of its main tourist destinations. The suspension of the services was announced just a few weeks before it hosts the annual Mobile World Congress, one of the world’s largest meetings of the mobile tech industry.

“The new restrictions approved by the Catalan government leave us with no choice but to suspend UberX while we assess our future in Barcelona,” an Uber spokesman said.

Spain’s Cabify, which has one million users in Barcelona, said in a statement it regretted that the city had “given in to the demands of the taxi sector, seriously hurting citizens’ interests”.

Cabify said the new regulation, which took effect on Thursday, had the specific objective of “the direct expulsion of the Cabify application” from Barcelona and the region of Catalonia.

Uber said it remained committed to operating in the long-term in Spanish cities and hoped “to work with the Catalan government and the City Council on fair regulation for all”.

Uber began offering its UberX service last March. The new regulations were imposed under pressure from taxi drivers, who held strikes that blocked roads in Barcelona and remain on strike in the Spanish capital, Madrid.

The drivers in Madrid escalated their protest this week by blocking one of the city’s main arteries. But Uber licenses in Spain are granted by local authorities, and so far Madrid authorities have said they would not adopt the same restriction as in Barcelona.

Cabify and Unauto VTC, an association of transport companies in Spain, said Barcelona’s decision to adopt the new regulation could put 3,000 jobs at risk in Barcelona.

Uber declined to say how many drivers work for it in Barcelona.

(Additional reporting by Paul Day, Editing by Angus MacSwan)



It has been a busy year for data breaches already, and January isn’t even officially over. This past week has been no exception: in the past seven days Airbus, Discover Financial, IT management giant Rubrik, the City of St. John in New Brunswick, Canada and the State Bank of India all reported exposures.

Discover Cards

Discover Financial has reported a “possible merchant data breach” that could have compromised user accounts to the State of California Attorney General’s office, in compliance with that state’s data breach rules. There are two separate notifications, available here and here.

“We can confirm this incident did not involve any Discover systems and we are forwarding this to the appropriate parties for review,” the company said in a media statement issued on Twitter. “We’re aware of a possible merchant data breach & are monitoring accounts. Our members can rest assured they’re never responsible for unauthorized purchases on their Discover card accounts.”

The credit-card issuer said that it has alerted cardholders to a data breach that appears to have taken place on August 13, 2018, but it hasn’t said how much personal information was compromised or how many individuals are affected.

Anthony James, chief strategy officer at CipherCloud, told Threatpost in a prepared statement that the length of time between the breach occurring and being found is typical.

“Discover’s breach is very typical of the news we hear continually concerning financial firms and credit processors,” he said. “In today’s environment attackers will get into your networks. That’s a fait accompli. We also expect that it will take months even before a card processor such as Discover is even aware of the intrusion and possible breach. What we don’t expect to hear is that the databases and credit-card data are, amazingly, unencrypted.”

Discover is mailing out new cards to those it believes are affected.

“We should be realistic – the costs for Discover will be a rounding error, and have already been built into their Q4 provisions (up 18 percent over Q4 2017),” Colin Bastable, CEO of Lucy Security, said via email. “The 176 million card-carrying U.S. consumers are generally inured to the consequences of these breaches – between them, they have some 985 million credit and store cards, and the card issuers are very good at shipping out replacement cards. The real problem is that these thefts are not victimless crimes – real money is involved. Crime rings and governments are stealing from the American consumer and using it to finance more crime.”

A Pair of Misconfigured Servers

Meanwhile, two other major data exposures revealed this week are the result of misconfigured servers, a scourge that shows no sign of going away.

Rubrik, the IT security and cloud data management giant, exposed a whole cache of customer information, improperly stored in an Amazon Elasticsearch database. The exposed server wasn’t protected with a password, allowing access to pretty much anyone on the internet. The company pulled the server offline Tuesday.

According to reports, the tens of gigabytes of exposed data goes back to October, and includes customer names, contact information, contents of customer service emails, customer IT/cloud set-up and configuration information, and email signatures with names, job titles and phone numbers.

“It seems like almost every day we hear about another company that’s left an Elasticsearch server unprotected, leaving sensitive data exposed, and now we’re seeing it happen with IT vendors,” said Balaji Parimi, CEO, CloudKnox Security, via email.

“There’s a simple reason these vulnerabilities are so prevalent: the complexity of multi-cloud environments, combined with a lack of visibility into who can do what. When combined, this leads to overprivileged identities operating in environments where security teams can’t answer simple questions like: ‘what privileges does each service account or employee have?’ and ‘what actions have they performed?’ These vulnerabilities are rarely malicious – they result from lack of visibility into what people are doing in extremely complex environments,” Parimi said.

In other news, the State Bank of India, the largest financial institution in that country of nearly one and a half billion people, also said this week that it failed to secure a server with a password, leaving the financial information for millions of customers exposed as a result of “human error.” The database contained text messages, account balances, recent transactions, partial bank account numbers and customers’ phone numbers, impacting an undisclosed number of people.

CipherCloud’s James noted, “Financial institutions are under constant cyberattack. That, of course, is no surprise to any of us. Instead, the data exposure at the State Bank of India Mumbai data center isn’t due to an attacker – it is due to misconfiguration and errors in administration. Right now we are seeing a surge in data exposure and breach due to these administrative errors.”

Third-Party Supplier Credit-Card Breach

And finally, credit-card information from about 6,000 people in the Canadian city of St. John was seen being sold on the Dark Web, thanks to a payment-card skimmer installed on the third-party parking system that it uses. The malware collected credit-card information for 18 months from those paying parking tickets before being discovered.

“Once data has been stolen, it’s used in a number of ways, including account takeover and identity fraud,” explained Ryan Wilk, vice president of Customer Success at NuData Security. “More recently, we’ve seen a change in the value of stolen data as more and more institutions are implementing user authentication solutions that render stolen data valueless. The loss of credit card data is a worry for everyone. The data lost has the potential to be lucrative in the hands of cybercriminals, who can use the card number and CVC to accurately mimic the legitimate customer in order to make fraudulent purchases, or facilitate further cybercrime.”





Recruitment startup Shortlist raises series A to help firms identify hidden talent

Shortlist has raised a $2 million series A led by Blue Haven Initiative, with participation from Compass Venture Capital, existing investor Zephyr Acorn, and several others.

According to Shortlist CEO and Co-Founder Paul Breloff, the funding will allow the firm to build out its vision for how companies in Africa and India build their teams.

“We’re building Shortlist to be a scalable way to collect signals that really matter, like demonstrated skills, interests, aspirations, work style preferences, and motivation — and use them to match the right jobseeker with the right company at the right time,” he said in a blog post.

Shortlist is used by more than 300 clients and 400,000+ jobseekers. The new raise will help it build a platform to know candidates better — their passions, personalities, and potential — and to use that data to find them jobs they love.

“We believe in a future where every team is comprised of the best-fit professionals, the job application process is human, transparent, and fair, and professional potential is unlocked across Africa and India — and beyond! We’re so grateful to have a group of dedicated investors who believe in this future too, and are joining the adventure.”

Though digital transformation is changing the world of HR and recruiting, Shortlist says this innovation is missing something essential, and (nearly) all of it is built for markets other than ours.

According to Breloff, these days innovation in HR champions artificial intelligence and machine learning, helping companies change the lives of hiring managers by pulling information from CVs, scraping keyword data from social networks like LinkedIn, and applying natural language processing to job descriptions. However, job descriptions are often thoughtless cut-and-paste efforts, CVs are mere retrospectives that miss potential, and LinkedIn keywords are self-reported and unvalidated.


Shortlist is therefore finding new ways to identify hidden talent, as most firms won’t otherwise be able to build the teams they need to succeed, while at the same time making it easier for youth to find steady jobs.



Here it is, supposedly: Samsung’s Galaxy S10 Plus.

The image above is apparently a render by Samsung designed for the press that was obtained by tech site 91Mobiles.

If accurate, the render doesn’t add to the long list of rumors surrounding Samsung’s Galaxy S10 Plus. If anything, it reinforces some of the existing rumors, like ultra-narrow bezels, an oval punch-hole for the selfie camera on the top right, two selfie cameras, three rear cameras, and Samsung’s switch to an in-display fingerprint sensor.

Indeed, there’s no visible fingerprint sensor on the front or back of the device pictured above, suggesting again that the Galaxy S10 Plus — at least — has an in-display fingerprint sensor. Specifically, rumors are claiming that Samsung is using ultrasonic technology for its in-display fingerprint sensor, which is supposedly better than the optical technology used in the OnePlus 6T.

Read more: Samsung’s upcoming Galaxy S10 smartphone could introduce a completely new design with new features — here are 11 rumors about what to expect

It isn’t clear, exactly, what the extra lenses on the Galaxy S10 Plus will do. If recent phones that have three rear lenses, like the Huawei Mate 20 Pro and LG V40, are anything to go by, we can probably expect a regular lens, a zoom lens, and an ultra-wide-angle lens.

As for the two selfie cameras, we can also deduce from phones with similar selfie camera setups, like the Pixel 3, that the second selfie lens is likely an ultra-wide-angle camera.

Whether the leaked render is real or not, we’ll have to wait to get the confirmed details from Samsung itself. Samsung will host its Unpacked event on February 20, where we’re expecting it to unveil its latest Galaxy S smartphones.



I wrote about Cohesity Helios back in October and this week finally started to use Helios to manage my virtual cluster. Helios is a SaaS offering for managing a collection of Cohesity clusters from a central location. For today I only have a single cluster to manage, so the process to add the cluster to Helios is simple. I posted a video of the process, showing my first time using Helios and how simple it was to get started. I talk about IT simplification a lot, and this is definitely easy to operate.

Add My Cohesity Cluster to Helios

Step one – Upgrade your cluster

The minimum supported cluster version for Helios is 6.0.1c; I upgraded my cluster to 6.1.1a since it is the latest release available today. Refer to my earlier post and video about the upgrade process.


Step Two – Enable Helios access

Your updated cluster will not automatically get the ability to connect to Helios; you will need to have the Cohesity support team enable Helios access on your cluster. Then you will get a Helios link in the top menu bar.


Step Three – Connect to Helios

Click that Helios icon and click the “Enable” button; optionally, click the “View Only” button if you don’t want configuration changes to be made from Helios. I would like to use Helios as my main point of control, so I leave “View Only” disabled.


You will be prompted to authenticate with your Cohesity support credentials to associate this cluster with your support account; Helios uses your support account for authentication. It will then take a few minutes for your cluster to update Helios and for your cluster details to become available in Helios.

Step Four – Use Helios

Open a new tab and log on with your support credentials; your cluster will be available and will look exactly the same as it does through the local interface on your cluster.


We went from on-site only management to SaaS management from anywhere in a matter of minutes.

Step Five – Repeat for each cluster

Helios does help me manage my Cohesity cluster from anywhere, but its real value will come when I have multiple clusters managed from a single place. I will be taking a look at that in a future video when I have my second lab cluster operational.

© 2019, Alastair. All rights reserved.



Amazon (AMZN) reported strong fourth-quarter results, with earnings smashing expectations and revenue also stronger than expected. But first-quarter guidance came in a bit light of estimates.

EPS came in at $6.04 on revenue of $72.4 billion, compared to expectations of $5.65 per share on $71.61 billion in revenue. This represents a 20% increase in revenue from a year ago, but it’s at the slowest pace in three years.

Shares of Amazon were fluctuating after the report.

Amazon had dubbed this holiday season the strongest in the company’s history just weeks ago, and investors were likely anticipating solid results.

Sales increased in all but one of Amazon’s five categories. Physical stores, a category primarily comprised of Whole Foods, saw a 3% decrease in revenue year-over-year.

Amazon Web Services saw 46% year-over-year growth, which is in line with previous quarters.

Percentages are a percentage of overall revenue. Source: David Foster/Yahoo Finance

“Echo Dot was the best-selling item across all products on Amazon globally, and customers purchased millions more devices from the Echo family compared to last year,” Amazon CEO Jeff Bezos said in a press release.

Prior to Amazon’s earnings release, analysts at SunTrust Robinson Humphrey suggested that the company’s own estimates, particularly in the fourth quarter, aren’t always impressive.

“History shows that Amazon has had limited success in exceeding Street expectations for revenue in 4Q, likely due to the lack of visibility at the time guidance is given, and difficulties in forecasting a high amount of sales volume in such a small window,” SunTrust analysts wrote in a note to clients.

The company has only recently exceeded analyst expectations during the fourth quarter. Prior to Q4 of 2017, the last time Amazon beat consensus estimates was in 2009, according to SunTrust.

Amazon first broke out how many customers actually pay for Prime membership in April 2018: a whopping 100 million. According to Stifel’s estimates, more than half of U.S. households now have a Prime account.

Shares of Amazon are up around 17% over the last year. Analysts surveyed by Factset have an average price target of $2,140 on the company. Google parent Alphabet will be the last FAANG stock to report fourth-quarter results on Monday, February 4.



We are using a library provided by someone else. Of late, due to changes in that library, our project has 500 errors. When migrating to the new library we found that only 15 APIs are failing and the 500 errors are repetitive (multiple occurrences of those 15 errors, as the same calls are used in many places).

So, for the migration I proposed creating an internal static wrapper class that wraps those library API calls. If the library were to change again we would have less code to change, making the code easier to maintain in future. Also, by wrapping calls we avoid human errors (or unintended (overloaded) API usages).

But some folks here don’t see the point in having another wrapper class, which they feel is totally unnecessary. Their sole argument is that since most API changes are just one-liners, we can always change them using CTRL+H (find and replace). They also say that the extra abstraction I am suggesting takes away readability, as it hides the actual API call behind another (even if meaningful) method name for the coder/reader.

What’s the best approach here? Am I wrong with my suggestion?

It is a relatively common practice to wrap unstable APIs and libraries with custom wrappers. One common use, for example, is to translate exceptions of that library into your nomenclature of exceptions.

More generally these wrappers are known as Adapters, though Adapters (IMHO) are meant more for providing functionality needed by one side while hiding the exact "language" of the other side, not for coping with instability on that other side.

You mentioned the use of statics, though; I’m generally not a big fan of those. IMHO it is sometimes better to have an interface represent the functionality you need, and then have subtypes of that interface, where one of these subtypes uses the third-party library. The advantage is that you can one day switch to another vendor without changing every call in your system.
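As an illustration of that interface-based approach (all names here are hypothetical, invented for the sketch rather than taken from the question):

```java
// The functionality your application actually needs,
// expressed in your own vocabulary.
interface ReportStore {
    void save(String id, String content);
    String load(String id);
}

// Stand-in for the vendor's unstable third-party API.
class VendorClient {
    private final java.util.Map<String, String> data = new java.util.HashMap<>();
    void putDocumentV2(String key, String body) { data.put(key, body); }
    String fetchDocumentV2(String key) { return data.get(key); }
}

// Adapter: the only class that mentions the vendor's names.
// If the vendor renames putDocumentV2 again, this is the
// single place that changes.
class VendorReportStore implements ReportStore {
    private final VendorClient client = new VendorClient();
    public void save(String id, String content) { client.putDocumentV2(id, content); }
    public String load(String id) { return client.fetchDocumentV2(id); }
}

public class Demo {
    public static void main(String[] args) {
        ReportStore store = new VendorReportStore();
        store.save("q1", "quarterly report");
        System.out.println(store.load("q1"));
    }
}
```

The rest of the system only ever sees ReportStore, so swapping vendors means writing one new adapter, not hunting call sites with find-and-replace.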

Either way, you’re generally on the correct track. IMHO anyone who thinks CTRL-H is a valid refactoring tool is asking for trouble. Are they at least using getters and setters (where applicable) in their code?

Also, the readability part is unclear to me. An adapter with a readable name is just as good as an original API with a readable name.