Homemade Pasta

 

Why buy your pasta when you can make your own in five minutes? There is a lot of foolishness around the net with people making pasta entirely by hand; do disregard them, they just envy us people of the 21st century. Another point of debate is salt: some put salt in the dough, others just boil the pasta in salted water. Ask your inner self if you really care.

 

Let's start. If you want to make food for around four people, you will need around 200 g of pasta.

Here is the recipe

200 g flour (use the cheapest), 2 eggs, ¼ teaspoon of salt, ½ teaspoon of olive oil.

Put everything into the mixer, but put the flour into the bowl before the eggs.

Blend it until it starts to collect in chunks. Form the dough into two flat buns and we are ready for the next step.

If the dough is sticky, sprinkle it with flour. This is actually very important in every step to come: if sticky, sprinkle with flour.

Now adjust your pasta machine to the widest setting between the rollers and run your dough through it 6 times, folding it every time; remember to flour if sticky. The result should be a fairly flat, silky dough.

Now turn the rollers in a notch and run the dough through twice. Do not fold the dough at this step. Repeat this step until you are at the thinnest setting on your pasta machine.

Sprinkle your pasta with flour and you are done.

World's Best Lasagna

 
 
 
 
INGREDIENTS
1 pound sweet Italian sausage
3/4 pound lean ground beef
1/2 cup minced onion
2 cloves garlic, crushed
1 28-ounce can crushed tomatoes
2 6-ounce cans tomato sauce
1/2 cup water
2 tablespoons white sugar
1 1/2 teaspoons dried basil leaves
1/2 teaspoon fennel seeds
1 teaspoon Italian seasoning
1 tablespoon salt
1/4 teaspoon ground black pepper
4 tablespoons chopped fresh parsley
12 lasagna noodles
16 ounces ricotta cheese
1 egg
1/2 teaspoon salt
3/4 pound mozzarella cheese, sliced
3/4 cup Parmesan cheese, grated
 
COOKING DIRECTIONS
In a Dutch oven, cook sausage, ground beef, onion, and garlic over medium heat until well browned. Stir in crushed tomatoes, tomato sauce, and water. Season with sugar, basil, fennel seeds, Italian seasoning, 1 tablespoon salt, pepper, and 2 tablespoons parsley. Simmer, covered, for about 1 1/2 hours, stirring occasionally.
 
Bring a large pot of lightly salted water to a boil. Cook lasagna noodles in boiling water for 8 to 10 minutes. Drain noodles, and rinse with cold water. In a mixing bowl, combine ricotta cheese with egg, remaining parsley, and 1/2 teaspoon salt.
 
Preheat oven to 375 degrees F (190 degrees C).
 
To assemble, spread 1 1/2 cups of meat sauce in the bottom of a 9×13 inch baking dish. Arrange 6 noodles lengthwise over meat sauce. Spread with one half of the ricotta cheese mixture. Top with a third of mozzarella cheese slices. Spoon 1 1/2 cups meat sauce over mozzarella, and sprinkle with 1/4 cup Parmesan cheese. Repeat layers, and top with remaining mozzarella and Parmesan cheese. Cover with foil: to prevent sticking, either spray foil with cooking spray, or make sure the foil does not touch the cheese.
 
Bake in preheated oven for 25 minutes. Remove foil, and bake an additional 25 minutes. Cool for 15 minutes before serving.

Why Measure – Quality and Reputation

Bad Data Quality and Reputation 

Once an impression of bad quality has taken root among users, it is difficult to change the reputation. The belief in bad quality lives on long after the situation has been corrected.

This is because users’ perception of “data quality” is based on the errors and inconsistencies they notice, while good functionality and good data make a much weaker impression.

Combine this with the fact that data quality will never be 100% good, and we surely have a losing hand.

So how do we change this to a winning hand? Through continuous measurement, we provide evidence that our governance efforts are having the desired effect, and that data quality is constantly improving.

 

Guldmann, IBM

ERP Systems and Data Quality Neglect

Moving into a new ERP system is an ordeal for any organization. The amount of effort it takes to customize the system and implement it organizationally is often underestimated. Apparent “must haves” are given priority at the expense of less appealing but equally important initiatives. Examples of these are the initial cleansing of data, functionality gap analysis, user acceptance testing, governance, etc.

This blog entry takes an interest in data quality and the key performance indicators (KPIs) that expose the trends in data quality: trends from which we deduce whether our MDM initiatives, such as governance, are having the desired effect.

Data Neglect

As untimely, duplicated, misleading, and inadequate data impedes users’ usage and perception of a system, it is key to keep it out of the new system. Lost confidence in the system quickly leads to an amputated scenario, totally contrary to the master data intentions, where information is stored and sought elsewhere than in a consolidated single source of the truth. Piecemeal data repairs will also require more effort than it would have taken to correct the data before it entered the system.

Growing user confidence in the system through proof

Establishing user trust is key. By cutting away the cleansing and allowing data with poor quality into your system, you are choosing an uphill battle from the start. This doesn’t mean we need to postpone migrating until data is “perfect”, since that is unlikely ever to happen. Choosing a good starting point is imperative for success, and a continuing effort to prove that data is trustworthy and continuously getting better will strengthen the initiative.

Static testing is inadequate

The idea of “static testing”, where an initial report establishes a baseline of data quality to measure your data repairs against, might be good enough for the foundation of an initial bulk load of data. In a live system such data are worthless, because the statements they make are outdated even before the ink hits the paper. Such tests aim to prove that data meet requirements for cut-over phases, such as just before moving into production. They do not reflect whether the root cause of bad data quality has been uprooted: data might be in the process of being corrupted again while the static report shows no concerns. I believe that static testing keeps alive the idea of “repairing data as a solution” rather than fixing the root cause. This can be boiled down to: static testing will not build confidence in the long term; MDM, governance, and data quality improvements are by nature continuous efforts.

Continuously exposing quality KPIs

The way we monitor that our governance initiatives are in effect, and that our rules are applied, is through constant measurement and by exposing quality trends.

Here is a list of KPIs you should monitor. They are listed from “easiest to implement” to “most yielding”.

  • Completeness: are all necessary data present, or are they missing?
  • Integrity: are the relations between entities and attributes consistent?
  • Consistency: are data elements consistently defined and used?
  • Accuracy: does data reflect the real-world objects or a verifiable source?
  • Validity: are all data values within the valid domains specified by the business?
  • Timeliness: is data available at the time it is needed?

Be aware that the value of a KPI is inversely proportional to the effort it takes to implement it. As an example, checking for completeness is much easier than implementing timeliness, which is a much more important factor.
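As an illustration of the simplest of these, here is a minimal T-SQL sketch of a completeness measurement. The table and column names (dbo.Customer, PostalCode, EmailAddress) are purely hypothetical; the point is that a query like this can be scheduled and its result logged, so the trend, not just the snapshot, becomes visible.

/* Completeness KPI sketch: share of customer records where the mandatory
   attributes are filled in. Table and column names are hypothetical. */
SELECT
    CAST(GETDATE() AS DATE) AS MeasuredOn,
    COUNT(*) AS TotalRows,
    SUM(CASE WHEN PostalCode IS NULL OR PostalCode = '' THEN 1 ELSE 0 END) AS MissingPostalCode,
    SUM(CASE WHEN EmailAddress IS NULL OR EmailAddress = '' THEN 1 ELSE 0 END) AS MissingEmail,
    CAST(100.0 * SUM(CASE WHEN PostalCode IS NOT NULL AND PostalCode <> ''
                          AND EmailAddress IS NOT NULL AND EmailAddress <> ''
                          THEN 1 ELSE 0 END) / NULLIF(COUNT(*), 0) AS DECIMAL(5,2)) AS CompletenessPct
FROM dbo.Customer

Inserting the result into a small history table every night is what turns a static test into a trend.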

Takeaways

I guess this article can be boiled down to “you cannot control what you are not measuring”, combined with the advice that data quality issues should be dealt with in a continuous and controlled manner, as early as possible in the project, while complexity is not yet increased by system constraints.

Modeling M:N and #MDS #MASTERDATESERVICES

Modeling many-to-many relations often poses a challenge. The intent of this blog is to deal with this and at the same time demonstrate what the practical implementation looks like in Master Data Services.

It is a widespread mistake to look at Employee as a sub-type of Person. After all, a large number of the people you want to keep track of are in fact employees. But a person is not inherently an employee; he or she only becomes an employee by entering into a relationship with a company. The quality of being an employee, then, is an aspect of a relationship, not of an entity.

The reason for this modeling misperception is to be found in how verbal expressions are converted into a data model: every noun is tempted into becoming an entity, while many of the nouns really set the context of an object by its relationships to other objects.

In the example below, a Person is employed by one or more Organizations, and each Organization may be the employer of one or more people. This implies Employment as an intersection between Organization and Person. The fact that an employment is the basis of one or more position assignments throws us into another intersection, where a Position Assignment intersects Employment and Position.

[Diagram: Person, Organization, Employment, Position, and Position Assignment entities and their relationships]

Looking at the payload of the entity Employment, it will include data such as "Employment date", "Termination date", and so forth, whereas the entity Person carries data such as "Social security number".
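To make the intersection tangible, here is a minimal T-SQL sketch of the structure described above; table and column names are illustrative only, and Position and Position Assignment would follow the same pattern.

/* Person and Organization are independent entities; Employment is the
   intersection carrying the relationship-specific payload. */
CREATE TABLE dbo.Person (
    PersonID             INT IDENTITY PRIMARY KEY,
    FullName             NVARCHAR(100) NOT NULL,
    SocialSecurityNumber NVARCHAR(20)  NULL
)

CREATE TABLE dbo.Organization (
    OrganizationID INT IDENTITY PRIMARY KEY,
    Name           NVARCHAR(100) NOT NULL
)

CREATE TABLE dbo.Employment (
    EmploymentID    INT  IDENTITY PRIMARY KEY,
    PersonID        INT  NOT NULL REFERENCES dbo.Person (PersonID),
    OrganizationID  INT  NOT NULL REFERENCES dbo.Organization (OrganizationID),
    EmploymentDate  DATE NOT NULL,
    TerminationDate DATE NULL
)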

 

The example in MDS is to follow.

Modeling Maps with #MDS #MASTERDATESERVICES

Data entities are mostly a shared resource, utilized in or contributed to by multiple systems. For example, a company may have separate systems for managing customer interactions. A part of the data will probably be an intersection, but each of the systems is also likely to have its own subset of the customer information. Establishing a common definition and mapping it to the natural keys of the source systems is key, and Master Data Services can be an effective place to store and maintain these mappings.
The following examples aim to demonstrate how to use MDS for mapping. It is also worth noticing that the examples are generic and simplistic, assuming the members to be at the same granularity. Furthermore, it is a prerequisite that the reader is familiar with MDS and knows its terms.

 

One to One

In MDS, create one member to represent the country. There are two attributes indicating how the country is referred to in the two other systems.

The benefit of this approach is that it is easy to map the source systems to one member in MDS with no impact on usability. The downside is that it is not possible to map multiple source entries to one MDS entry.

[Screenshot: the country member with one mapping attribute per source system]
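Resolving a source-system code against such a member then becomes a simple lookup. A minimal sketch, assuming the entity is exposed through a subscription view called mdm.Country with one column per mapping attribute (view and column names are hypothetical):

/* Translate the code used by "System A" into the MDS member code.
   mdm.Country, SystemA_Code and @SourceCode are illustrative names only. */
DECLARE @SourceCode NVARCHAR(10) = N'DK'

SELECT c.Code AS MdsCountryCode,
       c.Name AS MdsCountryName
FROM   mdm.Country AS c
WHERE  c.SystemA_Code = @SourceCode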

 

One to Many

The 1:N scenario is handled by creating “mapping” entities, one for each source system. The reason for separating these into more than one mapping table is that the natural key from a source system is often built as a composite key, with a varying number of attributes.

This modeling technique supports more than one natural key in the source system for the same entity, but it cannot handle the Zombie Keys scenario, which is well described by Thomas Kejser.

[Screenshot: mapping entity for the first source system]

 

[Screenshot: mapping entity for the second source system]
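As a sketch of the underlying idea, the mapping entities could look like this when exposed as tables: one table per source system, each carrying that system's own composite natural key (all names are hypothetical):

/* One mapping table per source system; the key shape differs per system. */
CREATE TABLE dbo.Map_Country_SystemA (
    CompanyCode NVARCHAR(10) NOT NULL,
    CountryCode NVARCHAR(10) NOT NULL,
    MdsCountry  NVARCHAR(50) NOT NULL,   -- code of the MDS member
    CONSTRAINT PK_Map_Country_SystemA PRIMARY KEY (CompanyCode, CountryCode)
)

CREATE TABLE dbo.Map_Country_SystemB (
    RegionCode NVARCHAR(10) NOT NULL,    -- System B uses a single-attribute key
    MdsCountry NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_Map_Country_SystemB PRIMARY KEY (RegionCode)
)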

 

In earlier releases, modeling in this style came at the expense of the user experience, since navigating between members in the entities was slow and difficult. In this version, the ease of navigating to referred entities has been improved: by clicking on the icon, a new window is opened with the referring entity, and it is easy to maintain its values without destroying the context currently being edited.

[Screenshot: the icon that opens the referred entity in a new window]

Even though it is possible to maintain the mapping tables manually, the ideal method is to maintain them through ETL processes. The reason for this is that these tables quickly become complex, and human errors will most likely occur.

Many to Many

The N:M scenario aims to handle relations between source systems and an entity within a given time range, where relations can cease to exist and reemerge later on.

This solution features a number of special purpose entities:

[Diagram: the three special purpose entities and their relations]

1) An entity holding the “truth” as we would like to see it.

2) An entity holding the natural key from the Source System.

3) An entity holding the mapping in time between the entity containing the “truth” as we would like to see it and the entity holding the natural key for a given source system. This entity also holds a special payload: it must tell us in which timespan the relation is valid (ValidFrom, ValidTo). It is also handy to have other flags indicating what state the map is in (IsCurrent, IsDeleted). All of this is explained in more detail in Thomas Kejser's blog post Fixing Zombie Keys.

 

This illustration shows how the mapping table explicitly tells the life span of the individual keys.

[Illustration: validity ranges of the individual source keys in the mapping table]
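A minimal T-SQL sketch of such a map entity, carrying the validity payload described in point 3 above (names and data types are illustrative only):

/* Time-ranged map between the "truth" member and a source-system key.
   A key can cease to exist and later reemerge as a new row. */
CREATE TABLE dbo.Map_Customer_SystemA (
    MapID       INT IDENTITY PRIMARY KEY,
    MdsCustomer NVARCHAR(50) NOT NULL,   -- code of the "truth" member
    SourceKey   NVARCHAR(50) NOT NULL,   -- natural key in the source system
    ValidFrom   DATETIME     NOT NULL,
    ValidTo     DATETIME     NULL,       -- NULL while the mapping is open-ended
    IsCurrent   BIT          NOT NULL DEFAULT 1,
    IsDeleted   BIT          NOT NULL DEFAULT 0
)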

 

This construct is repeated per source system per unique natural key combination. Thomas Kejser's blog post An Overview of Source Key Pathologies explains the nature of each key type, and together with Transforming Source Keys to Real Keys – Part 1: Introducing Map Tables and Transforming Source Keys to Real Keys – Part 2: Using Maps To Fix Key Problems it makes a most interesting read on this topic.

Batching data into Master Data Services

In this post I am going to demonstrate how easy it is to use batching and explicit hierarchies in MDS. We are going to source data from the old Northwind database and stage it in MDS, using the contact title as an explicit hierarchy for our companies.

Doing such a series of actions creates all sorts of preconditions: we cannot place a member in a hierarchy before both the hierarchy and the member have been created. To do this we need to control the batching sequence.

1) First, let's create a model “DemoBatching”. This model is going to hold our entity “Company”.

2) Then add the entity “Company” to the model. We choose to go with explicit hierarchies because we want to utilize this structure to give us a logical grouping on contact title.

Now, if you created everything correctly, you will be able to pull down menus like this under “explore”.

 

Now to the data handling. This example shows how to populate the data structure through T-SQL, but you can just as easily use SSIS.


SET NOCOUNT ON

/* Set the model, user name and entity name*/
DECLARE @ModelName NVARCHAR(50) = N'DemoBatching'
DECLARE @UserName NVARCHAR(50) = N'PLATON\jgu'
DECLARE @EntityName NVARCHAR(50) = N'Company'

DECLARE @BatchID INT = NULL
DECLARE @UserId INT
DECLARE @VersionId INT
DECLARE @BatchName NVARCHAR(200)

/* Get the UserID from the username */
SET @UserId = (SELECT ID FROM mdm.tblUser u WHERE u.UserName = @UserName)

/* Get the newest version number (assuming this is the version we are populating) */
SET @VersionId = (SELECT MAX(ID) FROM mdm.viw_SYSTEM_SCHEMA_VERSION WHERE Model_Name = @ModelName)

/* Build the batch name now that @VersionId is known; building it earlier would yield NULL */
SET @BatchName = @ModelName + N'_Ver_' + CONVERT(NVARCHAR(10), @VersionId) + N'_User_' + @UserName

/* Truncate staging tables */
TRUNCATE TABLE mdm.tblStgBatch
TRUNCATE TABLE mdm.tblStgMember
TRUNCATE TABLE mdm.tblStgMemberAttribute
TRUNCATE TABLE mdm.tblStgRelationship

/* Create a new batch, we are going to sequence the following action with this batchid */
EXECUTE [mdm].[udpStagingBatchSave]
@UserID=@UserID
,@VersionID=@VersionID
,@Name=@BatchName
,@StatusID=1
,@ReturnID=@BatchID OUTPUT

/* Create members, deliberately not setting the name */
INSERT INTO [mdm].[tblStgMember] ([UserName],[Batch_ID],[ModelName],[EntityName],[MemberType_ID],[MemberName],[MemberCode])
SELECT @UserName ,@BatchID, @ModelName, @EntityName ,1, 'To Be Created',data.CustomerID
FROM Northwind.dbo.Customers data

/* for pure cosmetic reasons, let's tell MDS how many members the batch holds */
UPDATE mdm.tblStgBatch
SET TotalMemberCount=@@ROWCOUNT
WHERE ID=@BatchID

/* Create a new batch, we are going to sequence the following action with this batchid */
EXECUTE [mdm].[udpStagingBatchSave]
@UserID=@UserID
,@VersionID=@VersionID
,@Name=@BatchName
,@StatusID=1
,@ReturnID=@BatchID OUTPUT

/* Add the name again to the member */
INSERT INTO mdm.tblStgMemberAttribute
([UserName],[Batch_ID],[ModelName],[EntityName],[MemberType_ID],[MemberCode],[AttributeName],[AttributeValue])
SELECT @UserName,@BatchID,@ModelName,@EntityName ,1,data.CustomerID,'Name',data.CompanyName
FROM Northwind.dbo.Customers data

/* for pure cosmetic reasons, let's tell MDS how many attributes the batch holds */
UPDATE mdm.tblStgBatch
SET TotalMemberAttributeCount=@@ROWCOUNT
WHERE ID=@BatchID

/* Create a new batch, we are going to sequence the following action with this batchid */
EXECUTE [mdm].[udpStagingBatchSave]
@UserID=@UserID
,@VersionID=@VersionID
,@Name=@BatchName
,@StatusID=1
,@ReturnID=@BatchID OUTPUT

INSERT INTO [mdm].[tblStgMember] ([UserName],[Batch_ID],[HierarchyName] ,[ModelName] ,[EntityName]
,[MemberType_ID],[MemberName],[MemberCode])
SELECT @UserName ,@BatchID, 'ContactTitle',@ModelName, @EntityName
,2, data.ContactTitle,data.ContactTitle
FROM Northwind.dbo.Customers data
GROUP BY data.ContactTitle

/* for pure cosmetic reasons, let's tell MDS how many members the batch holds */
UPDATE mdm.tblStgBatch
SET TotalMemberCount=@@ROWCOUNT
WHERE ID=@BatchID

/* Create a new batch, we are going to sequence the following action with this batchid */
EXECUTE [mdm].[udpStagingBatchSave]
@UserID=@UserID
,@VersionID=@VersionID
,@Name=@BatchName
,@StatusID=1
,@ReturnID=@BatchID OUTPUT

/* Place the company within the explicit hierarchy */
INSERT INTO mdm.tblStgRelationship ([username],[Batch_ID], [ModelName], [EntityName], [HierarchyName]
, [MemberType_ID], [MemberCode], [TargetCode], [TargetType_ID])
Select @UserName,@BatchID,@ModelName, @EntityName, N'ContactTitle', 4, data.CustomerID , data.ContactTitle, 1
FROM Northwind.dbo.Customers data

/* for pure cosmetic reasons, let's tell MDS how many relations the batch holds */
UPDATE mdm.tblStgBatch
SET TotalMemberRelationshipCount=@@ROWCOUNT
WHERE ID=@BatchID

/* Tell MDS to start processing data batches */
EXEC mdm.udpStagingSweep @UserId=@UserId, @VersionId=@VersionId, @Process=1

Let's examine what MDS has done under “integration management”.
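If you prefer the database side to the UI, the same staging tables the script writes to can be inspected directly; a small sketch (column layouts differ slightly between MDS versions, hence the SELECT *):

/* List the staging batches created above, including the counts we set */
SELECT *
FROM   mdm.tblStgBatch
ORDER BY ID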

And let's see the final result by browsing the ContactTitle under explore -> company.
